Can you predict if a student will drop out? Yes!


Our case study on a higher education institution and the 360-degree student attrition solution we designed and developed: an intelligent way to understand attrition in the past and to use that understanding to predict attrition in the future.

It allows the institution to foster relationships with at-risk students before they disengage, ultimately decreasing attrition and avoiding revenue leakage.

 

See more about student attrition here

From operational challenges to a modern, automated and simplified organisation – Our Business SA case study

An Exposé case study on our advanced analytics solution for the ‘voice of business in South Australia’, Business SA. The solution was an important component of a large digital transformation program that saw Business SA transition to a modern, automated and simplified organisation, underpinned by the following technology changes:

• Adopted a cloud-first strategy, reducing Business SA’s dependence on resources that provided no market differentiation
• Simplified the technology landscape to a few core systems, each performing a specific function
• Established a modular architecture better able to accommodate change
• Implemented a digital strategy to support automated, self-service, 24/7 service delivery
• Improved data quality through simpler and more intuitive means of data entry and validation
• Utilised the latest desktop productivity tools, providing instant mobility capabilities

See the full case study here: Exposé case study – Business SA

Cognitive Intelligence meets Advanced Analytics


Acquiring knowledge of anonymous customers through Cognitive Intelligence is the next generation of customer-based Business Intelligence.

Human behavior and characteristics such as speech, demographics, and emotion can now be expressed digitally and blended with Advanced Analytics. Exposé applies this across a number of different use cases, as shown in the video.

 

Or navigate to – https://www.youtube.com/watch?v=XkeCLp7noyo

See more on Advanced Analytics

Disrupting the banking market

This video shows a comprehensive solution geared around disruption in the Banking market, from transactions through to advanced analytics.

The viewer meets three different customers and sees their challenges and how the bank responds to them.

The battle of the AMLs – Amazon Machine Learning vs Azure Machine Learning

Machine learning has become more accessible to businesses and the number of products on the market has grown, so as leaders in data and analytics we are regularly asked to recommend, or at least provide insight into, some of those products. For this comparison we chose a use case and solved it on both Amazon Web Services and Microsoft Azure: the two are comparable in the market, and both are cloud-based offerings that give businesses lower setup time and ongoing costs. Azure and Amazon were also selected because of our familiarity with Microsoft and Amazon Web Services products.

Both Microsoft and Amazon can achieve the same outcomes in different ways, and any perceived strengths and weaknesses highlighted here are based on my personal experience using the products. Note that both Microsoft and Amazon enhance their offerings frequently; this comparison is based on research conducted in September 2016, so some things may have changed since the time of writing.

Use case – The use case is centered on a common retail-related problem. A business offers services to clients, but its sales promotion activities were not targeted at specific groups of clients because the business had no in-depth knowledge of which services to push to which segment of clients. Below is a comparison of Amazon’s and Microsoft’s machine learning solutions based on my findings when trying to solve this use case.

 

Algorithms – Azure Machine Learning allows multiple ways to try and solve a given problem

Azure has a plethora of algorithms to choose from, whereas Amazon has only one. A simple Google search for an Azure Machine Learning cheat sheet will give you good guidance on which algorithms best suit a particular problem. Amazon, however, is limited to logistic regression, as confirmed on the Amazon Machine Learning Frequently Asked Questions page (https://aws.amazon.com/machine-learning/faqs/). While logistic regression has its place, it is only ideal when the classes can be separated by a single decision boundary; most use cases will require trialing multiple algorithms to achieve a good outcome.

Findings – If you need to predict whether a client will make another purchase or not, logistic regression would be ideal. However, to understand which services particular clients are more likely to purchase, more capable algorithms are needed to generate a good outcome. This is where Azure leads the way.
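To make “trialing multiple algorithms” concrete, here is a small, hypothetical scikit-learn sketch run locally rather than in either cloud service; the file and column names are placeholders for the retail services dataset.

    # Hypothetical local sketch of trying several algorithms on the same problem,
    # the way Azure ML lets you swap algorithms inside an experiment.
    # File and column names are placeholders.
    import pandas as pd
    from sklearn.model_selection import cross_val_score
    from sklearn.linear_model import LogisticRegression
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.ensemble import RandomForestClassifier

    data = pd.read_csv("client_services.csv")                      # placeholder file
    X = pd.get_dummies(data.drop(columns=["service_purchased"]))   # placeholder label column
    y = data["service_purchased"]

    candidates = {
        "logistic_regression": LogisticRegression(max_iter=1000),  # Amazon's only option
        "decision_tree": DecisionTreeClassifier(max_depth=5),
        "random_forest": RandomForestClassifier(n_estimators=200),
    }

    for name, model in candidates.items():
        scores = cross_val_score(model, X, y, cv=5)
        print(name, round(scores.mean(), 3))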

Azure Machine Learning Cheat Sheet
Source: https://azure.microsoft.com/en-us/documentation/articles/machine-learning-algorithm-cheat-sheet/

 

Categorise Data – Amazon Machine Learning categorizes the source data for you

Amazon automatically pre-processes the data and categorizes each field. The possible categories are Categorical, Numeric, Binary or Text. If a user wishes to, they can change the category of the data which may have an effect on the final outcome. If the data contains a row identifier, it can be declared and will be ignored when making predictions.

A binary field is one which can only have one of two values. In Amazon, binary fields may only use one of the following combinations (case-insensitive):

  • 0,1
  • Y, N
  • Yes, No
  • True, False.

It is not capable of understanding any other combination even if only two values exist in the field (e.g. paid, unpaid).

In Azure, there is no option to classify data fields. The classification is done automatically. While this is very intuitive, I would prefer to have the ability to manipulate fields as required. It is worth noting that Azure had no difficulty understanding that a field containing only two values was a binary data set. This reduced the amount of data manipulation that was required before creating a model and made the output easier to read.

AWS automatically proposed data categories

Findings – I had a column with an indicator for purchases made in the morning or afternoon (AM/PM). Amazon could not see this as a binary field, and forcing it to binary and then trying to predict whether a service was required in the morning or afternoon caused an error.
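A simple, hypothetical workaround is to recode such a column into one of the accepted value pairs before uploading the CSV; the file and column names below are placeholders.

    # Hypothetical workaround: recode an AM/PM indicator to 0/1 so Amazon ML
    # will treat it as binary (it only accepts 0/1, Y/N, Yes/No and True/False).
    import pandas as pd

    df = pd.read_csv("client_services.csv")                        # placeholder file
    df["morning_flag"] = df["time_of_day"].str.upper().map({"AM": 1, "PM": 0})
    df.drop(columns=["time_of_day"]).to_csv("client_services_recoded.csv", index=False)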

 

Source Data – Amazon gives good visibility of the content within the source data

Amazon grouped the fields into the categories identified above. The target field is the field that we are attempting to predict using machine learning. It was possible to view each field and understand its correlation to the target field. The distribution view of categorical data was useful, identifying the top 10 attributes and the number of occurrences of each, and a bar chart helped to show the distribution of attributes in each field.

AWS source data visualization of a single field

Azure, too, can show each field and the distribution of attributes within it. However, it only showed the top 10 attributes for each field, so it was not easy to understand the proportion of data that did not make the top 10. The correlation of a field to the target was not shown in the example I worked on, most probably because of the availability of multiple algorithms: the correlation of a field to the target field would be different for each algorithm.

Azure source data visualization

Findings – There were more than ten types of popular services, and in Azure I was unable to tell whether the services that didn’t make the top ten were significant or not. This was crystal clear in Amazon, which grouped all services outside the top ten and displayed them on the chart as ‘Others’.

 

Training a Model – Amazon has a fixed 70/30 split when training a model; Azure allows you to select your desired split

In Amazon, the data split between the training and evaluation datasets is fixed at 70% for training and 30% for evaluation. The only way to change this is to evaluate against a different dataset, which means manually creating your own training and evaluation datasets to suit your use case.

In Azure, it was possible to specify the desired split between the data available for training and scoring the model. I would consider this to be an essential feature as the split would need to change based on the problem that you are trying to solve.

Amazon can only do a 70/30 split

Findings – A 70/30 split didn’t cause an issue. However, the ability to trial different splits in Azure was useful for me to understand its impact.
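If a different split is needed on Amazon, the training and evaluation files can simply be prepared up front; here is a minimal, hypothetical pandas sketch (the file name is a placeholder).

    # Hypothetical sketch: preparing your own 80/20 training/evaluation files
    # when Amazon's fixed 70/30 split is not what you want.
    import pandas as pd

    df = pd.read_csv("client_services.csv")               # placeholder file
    train = df.sample(frac=0.8, random_state=42)          # 80% for training
    evaluate = df.drop(train.index)                       # remaining 20% for evaluation
    train.to_csv("train.csv", index=False)
    evaluate.to_csv("evaluate.csv", index=False)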

 

Evaluating a Model – Amazon automatically evaluates the machine learning model

Evaluation happens automatically in Amazon, immediately after the machine learning model is created. In a binary classification scenario, Amazon lets the user adjust the trade-off threshold, false positive rate, precision, recall, and accuracy. Each of these attributes affects all the others, and it is easy to visualize the outcome immediately by tweaking them.

In Azure, evaluation can be added to the experiment as necessary. In a binary classification scenario, it was only possible to change the trade-off threshold which in turn impacts other factors.

Amazon lets you change all metrics

 

Findings – I consider it critical that a machine learning model is evaluated while in development. Some manipulation of the data improved the quality of the model, and many iterations are usually required before landing on a good one.
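To illustrate the trade-off being adjusted in those consoles, here is a small local sketch with scikit-learn showing how moving the score threshold shifts the false positive rate, precision, recall and accuracy; the labels and scores are made-up placeholders.

    # Hypothetical sketch: how the score threshold trades off the metrics
    # that the Amazon ML console lets you adjust directly.
    import numpy as np
    from sklearn.metrics import precision_score, recall_score, accuracy_score, confusion_matrix

    y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])                      # placeholder labels
    y_score = np.array([0.2, 0.4, 0.8, 0.55, 0.7, 0.3, 0.45, 0.6])   # placeholder model scores

    for threshold in (0.3, 0.5, 0.7):
        y_pred = (y_score >= threshold).astype(int)
        tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
        print(threshold,
              "fpr=", fp / float(fp + tn),
              "precision=", precision_score(y_true, y_pred),
              "recall=", recall_score(y_true, y_pred),
              "accuracy=", accuracy_score(y_true, y_pred))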

 

Predictions – Making predictions using Machine Learning

Both Amazon and Azure have options to test manually, to batch process, and to create endpoints for real-time predictions. Batch processing is the more common method of generating predictions from machine learning; it is typically run over a large set of records and takes some time to process.

Testing predictions on Amazon – The easiest way to test a model is to manually enter some values and get a prediction, either by typing the values into a web form or by pasting in comma-separated values. The predicted label is displayed on the screen, but to see the confidence of the prediction you need to dig through the response. An excerpt of an example output is below:

      "PredictiveModelType": "BINARY"
    },
    "predictedLabel": "1",
    "predictedScores": {
      "1": 0.7453358769416809
    }

Amazon batch predictions – To do this you first need to convert the data to CSV (comma-separated values) format and upload it to Amazon S3 (Simple Storage Service). Predictions can be created from this file and are output to another CSV file in S3; the output file was compressed and saved as a GZ file, a format commonly used in Unix environments. A drawback was that there was no way of telling which row of the source data matched which row of the output. The only workaround I found was to add a row-number column when creating the machine learning model, as the row number then appeared in the predictions. To marry up the two you would need something like a VLOOKUP, which was a bit frustrating. Trial and error showed that the rows in the output were in the same sequence as the rows in the source file.
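A hypothetical alternative to the VLOOKUP step is to join the two files on that row-number column with pandas; the file and column names below are placeholders rather than Amazon’s actual output schema.

    # Hypothetical sketch: joining the batch-prediction output back to the source
    # rows via the row-number column added before training, instead of a VLOOKUP.
    import pandas as pd

    source = pd.read_csv("client_services_with_rownum.csv")                    # includes 'row_num'
    predictions = pd.read_csv("batch_predictions.csv.gz", compression="gzip")  # placeholder output

    merged = source.merge(predictions, on="row_num", how="left")
    merged.to_csv("predictions_with_source.csv", index=False)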

Amazon endpoint – An endpoint creates an Amazon web service which can be accessed via an API (Application Programming Interface). This is useful when there is a requirement to get predictions on a real-time or ad-hoc basis. Amazon claims that a query will be responded to in 100 milliseconds. They also claim to be able to process up to 200 queries per second. Any extra queries will be queued up and responded to. Higher capacities can be accommodated by contacting Amazon.

Testing predictions in Azure – You are presented with a test option where you can type data into a web form. There was no option to simply paste comma-separated values into the web page as with Amazon. However, the option to download a customized Excel workbook was very intuitive: the workbook shows parameters and predicted values side by side, was very easy to use, and returned responses within a second. This method is great if the dataset has a small number of fields but may become progressively harder to use with many fields.

Azure batch predictions – Similar to the test option, this option also made use of an Excel workbook. However, it was more flexible allowing you to pick a cell range for the input and output data. I set up my output data on another sheet in the same workbook. I tested 300 rows of predictions and got the results back within one second.

Endpoint in Azure – The endpoint in Azure appeared much more user-friendly. There is an API help page with sample code for a request and response using the dataset you are working on, including sample code for C#, Python and R, which reduces the complexity of having to write code from scratch.
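For illustration, here is a minimal sketch of what calling such a request-response endpoint from Python might look like, using the requests library. The URL, API key, column names and payload layout are placeholders based on the general shape of that era’s sample code; the API help page generates the exact request format for your own model.

    # Hypothetical sketch: calling a published Azure ML request-response endpoint.
    # URL, key and payload layout are placeholders; use the API help page for the exact format.
    import requests

    url = "https://<region>.services.azureml.net/workspaces/<workspace>/services/<service>/execute?api-version=2.0"
    api_key = "<your api key>"

    payload = {
        "Inputs": {
            "input1": {
                "ColumnNames": ["client_segment", "service_type", "time_of_day"],
                "Values": [["A", "gardening", "AM"]],
            }
        },
        "GlobalParameters": {},
    }

    response = requests.post(url, json=payload,
                             headers={"Authorization": "Bearer " + api_key})
    print(response.json())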

Azure batch predictions in Excel

Findings – In my use case, I didn’t require real-time predictions. I used batch predictions and it was a hassle to have to manually marry up the original data with the predictions from Amazon.

 

Costs – Azure’s better features and user interface cost more. Up to five times more!

Please note that pricing may vary based on many factors, including but not limited to the region, the complexity of the solution, the computing tier chosen, the size and nature of your organization, and any other negotiations entered into with either Microsoft or Amazon.

I performed some high-level cost calculations based on the dataset used. An Amazon solution costs approximately USD 100 per month for 20 hours of computing time and 890,000 predictions. Real-time predictions cost more than batch predictions, although the difference was not significant for the model I used (USD 104.84 for real-time vs USD 97.40 for batch predictions).
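As a rough sanity check only, the batch figure reconciles if you assume the rates Amazon published at the time, roughly USD 0.42 per compute hour and USD 0.10 per 1,000 batch predictions; these rates are my recollection and should be verified against the pricing pages below.

    # Back-of-the-envelope reconciliation of the Amazon batch estimate.
    # The rates are assumptions (verify against the pricing links below).
    compute_hours = 20
    predictions = 890000
    compute_rate = 0.42      # assumed USD per compute hour
    batch_rate = 0.10        # assumed USD per 1,000 batch predictions

    total = compute_hours * compute_rate + (predictions / 1000.0) * batch_rate
    print(total)             # -> 97.4, in line with the ~USD 100/month estimate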

The same solution using Azure came up to an estimated total cost of just under USD 500.

The prices are my observations only and should always be confirmed with the license provider. For more information on pricing, please see the links below:

Amazon – https://aws.amazon.com/machine-learning/pricing/

Azure – https://azure.microsoft.com/en-us/pricing/details/machine-learning/

 

Audience – Amazon appears to be more focused on technical users

Overall, Amazon appeared to be aimed at users who are more technically minded and comfortable with programming. When learning to use Amazon Machine Learning I came across only one worked example, which I consider very limited.

On the other hand, Azure Machine Learning appeared more suited for power users within a business, as well as the technically minded. It provides a more familiar graphical drag-and-drop interface. These components are pre-programmed and are grouped such that they are easy to understand. The examples provided are a great way of getting accustomed to the tool and there are good sample datasets available.

 

Azure would appeal to most users

If you are comfortable with coding, have technical resources to support you, and can manage with only the logistic regression algorithm, then Amazon would be ideal for you. With its lower cost and range of other cloud services, it is definitely worth considering. For the rest of us, Azure is clearly the better choice.

It is worth remembering that the many algorithms and the ease of use come with a price tag attached.

Findings – I almost felt spoilt for choice with the number of algorithms in Azure. I spent a lot of time trying to get a good model working on Amazon. The process was much easier when I used the same dataset in Azure.

In conclusion, using Azure I was able to understand which groups of clients are most inclined to purchase certain services. This makes it possible to target specific clients and make the best use of the marketing budget. The ability to easily view the predictions alongside the fields used to generate them was very helpful. The predictions will be used to target clients who are more likely to purchase the services, which in turn will create a better outcome for both the business and its clients. Overall, Azure proved to be the more mature offering for someone wanting to solve a particular use case while avoiding deep coding.

We will keep a close eye on the Amazon offering to see when it catches up.

From water cooler discussion to corporate Data Analytics in record time


For those who have been involved in conventional Business Intelligence projects, you will be all too familiar with the likely chain of events and the likely outcome. It typically goes something like this: the idea is incubated by someone (very often within ICT), a business case for a project is written, someone holding the purse strings approves it, business and functional requirements are gathered, written and approved, the solution is architected, developed, tested and signed off, the solution is implemented, and voila, the business MIGHT use the solution.

These conventional solutions are problematic for a number of reasons:

Requirements are lukewarm at best – If the idea was not incubated by someone experiencing a real business pain point, or wanting to exploit a real opportunity, then the “requirements” gathered are always going to be skewed towards “what do you think you need” rather than “what do you know you need right now”.

Suboptimal outcome – Lukewarm requirements lead to solutions that do not necessarily add value to the business, as they are not based on real pain points or specific opportunities that need to be exploited. So when the formal SDLC process concludes, the business may, in the spirit of trying to contribute to a successful project outcome, try to use the solution as is, or send it back for rework so that it at least satisfies some of their needs.

Low take-up rate – When a suboptimal solution is accepted just to be a good team player, usage will drop off over time because it is not serving a real need.

Costly rework and an expensive project – These issues mean that the business often only sees the final product late in the project lifecycle, and often only then starts thinking about what it could do for them if changes were applied to meet their real needs. The solution is sent back for changes, and that is very costly.

So what has changed?

Technology has finally caught up with what business users have been doing for a long time.

For many years, given the costly and problematic outcomes of conventional BI, users have often preferred access to the data they need over fancy reports or analytics: they would simply download the data into Excel and create their own, really useful BI (see the related blog here: https://exposedata.wordpress.com/2016/07/02/power-bi-and-microsoft-azure-whats-all-the-fuss-about/). This, together with the advent of a plethora of services in the form of Platform, Infrastructure and Software as a Service (PaaS, IaaS and SaaS), and more recently BI-specific services such as Data Warehouse as a Service (DWaaS), is proving highly disruptive in the analytics market, and a true game changer.

How has this changed things?

In the end, the basic outcome is still the same: getting data from some kind of source to the business in a format they can consume, by converting it into useful information.

But technology and clever architectures now allow for a fast response to red-hot requirements. This is game-changing: solutions are now nimble and responsive, and can therefore respond to requirements that are often raised informally when pain points and/or opportunities come up in the course of the normal business day, in meetings, over drinks with colleagues and at the “watercooler”. The trick is to recognize these “requirements” and relate them back to the opportunities that the new world of data and analytics provides. If that can be done, the resulting solutions typically respond much better to these organic requirements than solutions built from requirements incubated and elicited in a more formal way.

What are water cooler discussions?

I use this term to describe informal discussions around the organization about pain points or opportunities in the business. These pain points or opportunities represent organic requirements that should be responded to quickly if they hold real value. This DOES NOT mean that formal requirements are no longer valid, not at all, but it does mean that we need to recognize that real requirements manifest themselves in informal ways too. Here is an example of how a requirement can originate both formally and informally:

The problem

City planning realizes that there are issues with parking availability, as they receive hundreds of calls each month from irate commuters stuck driving around looking for parking. It seems that commuters are abandoning the city in favor of suburban shopping centers where ample parking is provided. This is not good for businesses in the city center, and not good for the city council.

Formal

The head of planning realizes that information (in the form of data) will be key to any decision he makes to deal with the problem, so he requests reports on traffic volumes, finances and works management planning. This data is not in the data warehouse, so a business analyst is engaged to elicit the requirements for the reports, and so the long and costly process starts.

Watercooler

The Head of Planning tells a colleague that he wishes he could expedite getting his hands on the information he needs but he has a limited budget so he cannot employ more resources to move his reports along quicker.

This is overheard by the data analytics consultant, who realizes that to support such an important decision properly, the Head of Planning will have to look at the issue from multiple angles that such formal reports are unlikely to provide. The data he needs, i.e. traffic volumes, finances and works management planning, must be blended with other contextual data such as weather, events, and date and time of day.

The consultant knows that:

  • The city already holds traffic volumes, finance data and works management planning across fragmented source systems.
  • The city already collects millions of sensor readings per day – parking, traffic flow, commuter flow.
  • There are heaps of contextual data out there which are easy to access – weather, events, city businesses’ financial results, employment figures, etc.
  • The city already has a cloud subscription where services such as IaaS, but especially PaaS, SaaS and DWaaS, can quickly be added and configured to allow for the collection, blending, storage and processing of data at a fraction of the cost of achieving the same on-premise.
  • The cloud subscription also allows for data science and predictive analytics activities to complement the collection, blending, storage and processing of data.

He calls a meeting with the Head of Planning, who is intrigued by the idea and the quick return on investment (ROI) at a fraction of the cost, and commissions the consultant to deliver a proof of concept (POC).

From water cooler discussion to the solution in record time

In my example (which is based on one of our real-world engagements using Microsoft Azure) the city has an existing investment in a cloud service. Also note that I provide a high-level resource mapping of the POC and solution at the end of this blog for both Amazon Web Services (AWS)® and Microsoft Azure®.

  1. The process starts with a POC, either in a free trial subscription or in the customer’s existing cloud provider such as AWS or Azure.
  2. The preference is to keep the solution serverless and only opt for IaaS where resources cannot be provided as PaaS or SaaS.
  3. IoT parking data forms the basis of the solution, and both real-time flows and history are required.
  4. Weather data, traffic flows into and around the city, events, and time of day add important context when predicting parking demand through peaks and troughs.
  5. Organisational works management planning data further enhances parking planning.
  6. Business financial results show the impact, and more importantly the lost opportunity cost to businesses, if people abandon the city in favor of suburban shopping centers.
  7. The processing and storage of the sheer volume of data is achieved at a fraction of the cost previously envisaged by the business.
  8. The resulting solution is not a replacement for any corporate data warehouse, but complementary to it; any existing data repository can be viewed as additional and useful contextual data in this new data analytics landscape.

The POC architecture involves the following resources (both Azure and AWS are shown):

Workloads and vendor components

Please note that the diagram below by no means implies a detailed design, but is a true representation of the high-level architecture we used to achieve the specific solution.

High-Level Architecture

Data flow patterns

  • A: IoT – sensor-originated real-time data (a simple sketch of this flow follows the list):

      o A1: into storage;

      o A2: into predictive analytics, where it is blended with B and C;

      o A3: directly into real-time visualizations;

  • B: Additional contextual data from publicly available sources (such as weather, events and business financial performance), into storage:

      o B1: into predictive analytics, where it is blended with A and C;

  • C: On-premise data such as works management, into storage:

      o C1: into predictive analytics, where it is blended with A and B;

  • D: Predictive analytical results into real-time visualizations, and also to storage for historical reporting;
  • E: Massively parallel processing, scalability and on-demand compute where and when required, supporting visual reporting.
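To make data flow A more concrete, here is a minimal, hypothetical sketch of a parking sensor posting a JSON reading to a cloud ingestion endpoint over HTTPS. The endpoint URL and payload fields are stand-ins; in the real solution this role is played by the vendor’s IoT ingestion service listed in the resource mapping above.

    # Hypothetical sketch of data flow A: a parking sensor posting a reading
    # to a cloud ingestion endpoint. URL and field names are placeholders.
    import json
    import time
    import requests

    INGEST_URL = "https://example-ingest.invalid/parking"   # placeholder endpoint

    reading = {
        "sensor_id": "bay-042",
        "occupied": True,
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
    }

    requests.post(INGEST_URL, data=json.dumps(reading),
                  headers={"Content-Type": "application/json"})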

Outcome

The result of this real-world POC was the realization by the business that very deep insights can be achieved by leveraging the appropriate data wherever it exists, and that by cleverly architecting solutions from components and services within easy reach, superior outcomes can be achieved fast.

The building blocks created in the POC were adopted and extended into a full production solution, and they set the direction for future data analytics workloads.

Strategic Segmentation

Preface

Market segmentation is a common practice in marketing and sales in order to better understand – and therefore be better able to target – customers. This same principle, though, can be applied to any business problem where the division of a diverse population into sub-populations based on similarities (or differences) would be advantageous.

Fortunately, rather than having to slice every variable an infinite number of ways, we can use unsupervised learning algorithms to produce groupings of samples based on the similarity of their features.

This post discusses the application of a technique to accomplish this, using a mixture of technologies (SQL, Python and R) and algorithms (Self-Organising Maps (SOM) and hierarchical clustering).

The Goal

The following example is taken from a pool of insurance claims, with the aim of understanding the sub-populations contained within the data to allow for appropriate monitoring and exception reporting. Specifically, we want to know if the mix of claims changes for the worse.

Claims by nature have a structure whereby the majority of the portfolio consists of low-value claims, tapering quickly to very few at the highest values (for the statistically minded, claim value generally follows a log-normal distribution). A typical partitioning strategy is to slice by claim duration (shorter claims are in general cheaper), and this is suitable in many cases when dealing with claims in aggregate. But given the lack of granularity, the challenge is that when there is a change in expected durations, a second level of analysis as to the “who” is required.

By using unsupervised learning, we will essentially encode the “who” into the cluster exemplars, so we can then focus on the more important question: “So what are we going to do about it?”

Approach

Obtaining the data was, fortunately, a trivial task: it was all contained in an on-premises SQL Server database, and it was as clean as it was going to get. Five years of historical records were used as the training sample.

Execution of the modelling was done using the R Kohonen package, and subsequent clustering of the SOM model by the hclust function.

Finally, to glue everything together into a processing pipeline I used Python 2.7 with the packages below (a minimal sketch of the glue follows the list):

  • sqlalchemy: to connect to the database;
  • numpy / pandas: for data massaging;
  • rpy2: to connect to R.
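Below is a minimal sketch of that glue, under a few assumptions: the connection string, table and feature names are placeholders, the exact conversion calls depend on the rpy2 version, and the R snippet trains a plain single-layer SOM for brevity (the actual model used SuperSOM with temporal layers).

    # Minimal sketch of the pipeline: SQL Server -> pandas -> R (kohonen) via rpy2.
    # Connection string, table and column names are placeholders.
    import pandas as pd
    from sqlalchemy import create_engine
    import rpy2.robjects as ro
    from rpy2.robjects import pandas2ri

    pandas2ri.activate()

    engine = create_engine("mssql+pyodbc://user:password@claims_dsn")    # placeholder DSN
    claims = pd.read_sql("SELECT * FROM dbo.claim_features", engine)     # placeholder query

    # Min-max scale numeric features to [0, 1] (see Technical Notes).
    numeric = claims.select_dtypes(include="number")
    claims[numeric.columns] = (numeric - numeric.min()) / (numeric.max() - numeric.min())

    ro.globalenv["claims"] = claims      # converted to an R data.frame by pandas2ri

    ro.r("""
        library(kohonen)
        set.seed(42)                                           # repeatable results
        som_model <- som(as.matrix(claims),
                         grid = somgrid(10, 10, "hexagonal"),
                         rlen = 80)
        # In newer kohonen versions $codes is a list of matrices.
        codes    <- if (is.list(som_model$codes)) som_model$codes[[1]] else som_model$codes
        clusters <- cutree(hclust(dist(codes)), k = 5)
    """)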

Algorithm Overview

I will not go into detail about the technical implementations of SOM and hierarchical clustering; there are far better explanations out there than I could hope to provide (in Tan 2006, for example). To give a simple overview, though, a SOM is an m × n grid of nodes, to which samples are assigned based on the similarity (or dissimilarity) measure used; commonly, and in our case, this is Euclidean distance.

Figure 1. A 10×10 Hexagonal SOM Grid

It is an iterative algorithm with random initialisation. At each step the node codebook (a vector identical in structure to the input data, with feature values that represent the samples assigned to the node) is updated, and the samples are re-evaluated as to which node they belong to. This continues for a set number of iterations; however, we can check whether the codebooks are still changing to any degree using a ‘Change Plot’. Figure 2 is an example of the ‘Change Plot’ from the kohonen package, where we observe that after 50 or so iterations there is minimal change occurring.

Figure 2. SOM ‘Change Plot’

Once we have a SOM model we are happy with, hierarchical clustering allows us to condense the grid into a smaller number of clusters for further evaluation. Clustering is performed over the node codebooks, after which a number of clusters is selected. Note that one could use the SOM nodes themselves as the clusters, but generally, depending on the number of samples the model is trained with, a SOM grid contains too many nodes to be useful.

Figure 3. Clustering of the SOM grid

Modelling Process

The process by which the modelling was undertaken is depicted in figure 4.

Figure 4. High-Level Modelling Process

As can be seen from figure 4, the Python script initiates everything and uses the results from the SQL and R calls.

This approach was taken primarily due to my own limitations in R programming; I am far more comfortable developing in Python and know my Python code will be more efficient than my R code purely due to proficiency.

My selection of R and the kohonen package was based on my research into a suitable implementation of SOM. kohonen includes an implementation called SuperSOM, a multi-layered SOM that trains each layer separately, which I thought would be ideal for temporal features in the source data (e.g. layer one = features at t1, layer two = features at t2, etc.).

Finally, the dataset was not big by any stretch, so whilst training a SOM can be a compute-intensive task, in this case anything more than a decent laptop was not required – on my i7 laptop, training the SuperSOM took only 3 seconds to run 80 iterations against ~5,000 samples.

Model Evaluation

Both Quantisation Error (QE) and Topological Error (TE) were used to evaluate the quality of the model, where:

  • QE = the mean distance from node samples to the node codebook. A low QE ensures that the node codebook is close to the samples allocated to it.
  • TE = the mean distance to a node’s nearest node. A low TE means that similar nodes are positioned close to each other on the grid.

TE is particularly important in our case as we wish to cluster the result; non-contiguous clusters are a side effect of a relatively high TE. A short sketch of both measures follows.
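As an illustration only, here is a small numpy sketch of both measures as defined above. It assumes you already have the scaled samples, the node codebooks and each sample’s assigned node; the names are placeholders.

    # Sketch of the two quality measures as defined above (names are placeholders).
    import numpy as np

    def quantisation_error(samples, codes, bmu):
        """Mean distance from each sample to the codebook of its assigned node.
        samples: (n_samples, n_features), codes: (n_nodes, n_features),
        bmu: (n_samples,) index of each sample's node."""
        return np.mean(np.linalg.norm(samples - codes[bmu], axis=1))

    def topological_error(codes):
        """Mean distance from each node's codebook to its nearest other codebook."""
        dists = np.linalg.norm(codes[:, None, :] - codes[None, :, :], axis=2)
        np.fill_diagonal(dists, np.inf)    # ignore each node's distance to itself
        return np.mean(dists.min(axis=1))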

Result

The clustering was used to create data exemplars around which monitoring was developed to understand any changes in the population mix. Interventions could then be focused on the relevant sub-population based on the dominant features of the samples in that group (i.e. if group “A” is a high-cost group and feature “X” is particularly dominant, how can we influence that feature to move samples into a lower-cost group?). It is for this targeting reason that marketing has historically been a big user of clustering.

New samples can be allocated to a cluster by first allocating them to the nearest node and then assigning them the cluster id of that node.
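A minimal sketch of that allocation step, assuming the trained codebooks and a node-to-cluster mapping are available (names are placeholders):

    # Score a new sample: nearest node by Euclidean distance, then that node's cluster.
    import numpy as np

    def allocate(sample, codes, node_cluster):
        """sample: (n_features,), codes: (n_nodes, n_features),
        node_cluster: (n_nodes,) cluster id for each node (e.g. from cutree)."""
        nearest_node = np.argmin(np.linalg.norm(codes - sample, axis=1))
        return node_cluster[nearest_node]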

Other Use Cases

Marketing: To group customers based on buying habits, product type purchases, purchase value etc to tailor marketing activities and/or focus on higher value customers.

Customer Service: Triage call types based on historical interactions into high/medium/low priority or risk.

Government: Understanding the demographics of users of services in order to better tailor the customer experience.

Technical Notes

  1. SOM uses random initialisation, so to get repeatable results a seed needs to be set;
  2. Min-max scaling between 0 and 1 was used given the presence of binary variables;
  3. In a single-SOM scenario one can simply call dist(som$codes) to get a distance matrix for clustering purposes, but for SuperSOM we need to handle the layers. A function was therefore created to calculate the average weighted distance across layers and output it as the distance matrix. The weighting was based on layer weight and also node distance (i.e. the further the nodes are from each other on the grid, the higher the weighting), to encourage contiguous clusters. A sketch of one possible formulation follows.
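The exact weighting used is not reproduced here, but as one possible formulation (an assumption, not the original code): blend the per-layer codebook distance matrices using the layer weights, then inflate the result by grid distance so that nodes far apart on the map are less likely to fall into the same cluster.

    # One possible formulation of note 3 (an illustrative assumption, not the original).
    import numpy as np
    from scipy.spatial.distance import cdist

    def weighted_layer_distance(layer_codes, layer_weights, grid_xy, grid_penalty=0.1):
        """layer_codes: list of (n_nodes, n_features_l) codebooks, one per SuperSOM layer.
        layer_weights: one weight per layer.  grid_xy: (n_nodes, 2) node coordinates."""
        combined = sum(w * cdist(c, c) for w, c in zip(layer_weights, layer_codes))
        grid_dist = cdist(grid_xy, grid_xy)
        # Penalise nodes that sit far apart on the grid to encourage contiguous clusters.
        return combined * (1.0 + grid_penalty * grid_dist)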

References

A good segmentation tutorial using the Kohonen package: http://www.r-bloggers.com/self-organising-maps-for-customer-segmentation-using-r

kohonen package official documentation: https://cran.r-project.org/web/packages/kohonen/kohonen.pdf

Wehrens and Buydens: Self- and Super-organizing Maps in R: The Kohonen Package, Journal of Statistical Software, October 2007, Volume 21, Issue 5: https://www.jstatsoft.org/article/view/v021i05/v21i05.pdf

Fusco and Perez: Spatial Analysis of the Indian Subcontinent: the Complexity Investigated through Neural Networks, Proceedings of CUPUM 2015: http://web.mit.edu/cron/project/CUPUM2015/proceedings/Content/analytics/287_fusco_h.pdf