Artificial Intelligence in Aged Care – June’s story

Meet June: long-time Adelaidean, keen gardener and grandmother of twelve!  At 86 years ‘young’, June moved from her own home into a local aged care facility following a series of falls that saw her hospitalised over the summer.  June was diagnosed with Parkinson’s disease 18 months ago and, as her falls became more frequent, she and her family made the decision to move her into residential care.

As symptoms of Parkinson’s disease progress at different rates for different people, getting June’s treatment plan right has been tricky, complicated by the fact that, like many aged care residents, she requires several different medications to manage her health.  June and her carers have noticed that her tremors appear to be triggered by stress or emotional experiences and lessen when she is relaxed.  It also appears that regular exercise and engagement in leisure activities help keep June’s tremors at bay.  As tremors often lead to a loss of balance, which can result in a fall, June’s care team have put together a robust healthcare plan which includes regular activity and time spent outdoors on top of her medication and occupational therapy.

The aged care facility where June lives recently embarked upon an initiative with the goal of improving the overall response to incidents such as falls, ensuring that responses are timely and that any incidents are attended to by the correct staff.  CCTV cameras have been installed in the corridors on the higher-dependency floors, such as the one June lives on.  The CCTV footage is used to track residents’ movements via location tracking, as well as their emotions via facial recognition.  Residents of these sections have also been given smart devices to wear that track real-time data such as the number of steps taken, standing versus walking rate, and heart rate.

When dealing with personal data, it is of paramount importance to ensure its security.  Additional precautionary measures will be taken to secure June’s personal data so that it is accessed for authorised purposes only.  Steps also need to be taken so that June’s personal data is not shared or used for any commercial gain – for example, as a way to categorise June and potentially affect her insurance premiums based on her risk as a patient.

Given what we know about the impact of stress on the incidence of tremors, the data from the CCTV coupled with June’s smart device will trigger an alert to the team lead in charge of her zone should the combined variables indicate an increased likelihood of stress.  The team lead is then able to ensure not only that there are sufficient carers positioned in high-risk zones, but also that they are equipped to deal with a possible fall.  Furthermore, the wearable device shows the care team when June is outside and how much sunlight – linked to positive mental health – June is getting.  The data also enables the team to see links between steps and heart rate.  If, for example, step counts are falling while heart rate is rising, this could be a sign of a potential health issue, enabling the appropriate medical intervention to happen proactively.
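
To make this concrete, here is a minimal sketch of the kind of rule such an alert could be built on. Everything here – the thresholds, the field names and the notify_team_lead function – is hypothetical and purely illustrative; a real solution would learn these patterns from historical data rather than hard-coding them.

```python
# Illustrative only: thresholds, field names and notify_team_lead are hypothetical.

def fall_risk_alert(resident, readings, notify_team_lead):
    """readings: time-ordered samples, e.g.
    [{'steps_per_min': 9, 'heart_rate': 72, 'stress_score': 0.2}, ...]"""
    baseline, latest = readings[0], readings[-1]

    # Rising heart rate while step count falls can signal a developing issue.
    heart_rate_rising = latest['heart_rate'] > baseline['heart_rate'] * 1.15
    steps_falling = latest['steps_per_min'] < baseline['steps_per_min'] * 0.7

    # Stress score derived from the CCTV facial-recognition emotion data.
    stressed = latest['stress_score'] > 0.6

    if stressed or (heart_rate_rising and steps_falling):
        notify_team_lead(
            resident=resident,
            reason='Increased likelihood of stress / fall risk',
            data=latest,
        )
```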

This scenario illustrates a proactive solution that benefits June and other residents in terms of the level of care they receive, not only through better responses to incidents but by helping to prevent incidents from happening in the first place.  At an organisational level, management also gains insights that assist them in planning and resourcing more effectively, as well as benefiting from the ongoing process improvements brought about by machine learning.

Stay tuned for a follow-up instalment as we explore the technical aspects of the business case!

Author: Sophia Siegele; Contributor: Shishir Sarfare

Introducing our bot analytics solution – a key part of your Artificial Intelligence story

Introducing our bot analytics solution – a visual tool that provides a deep understanding of the conversations users are having with your bot. And as bots are becoming key components of Artificial Intelligence solutions, this analytical solution will help you ensure your bot performs accurately.

Please see a more comprehensive brochure here – Bot Analytics Solution

 

Artificial Intelligence and Occupational Health and Safety – AI, an enabler or a threat?

We increasingly hear statements like “machines are smarter than us” and “they will take over our jobs”. The fact of the matter is that computers can simply compute faster, and more accurately, than humans can. So, in the short video below, we instead focus on how machines can be used to help us do our jobs better, rather than viewing AI as an imminent threat. It shows how AI can assist with better occupational health and safety in the hospitality industry. The solution does, however, apply to many use cases across many industries, and positions AI as an enabler. Also see an extended description of the solution after the video demo.

Image and video recognition – a new dimension of data analytics

With the introduction of image, video and video-streaming analytics, the realm of advanced data analytics and artificial intelligence has just stepped up a notch.

All the big players are currently competing to provide the best and most powerful versions: Microsoft with Azure Cognitive Services APIs, Amazon with AWS Rekognition, Google with Cloud Video Intelligence, and IBM with Intelligent Video Analytics.

Not only can we analyse textual or numerical data, historically or in real time, we can now extend this to videos and images. Currently, there are APIs available to carry out the following conceptual tasks (a brief code sketch follows the list):

  • Face Detection
      o Identify a person from a repository / collection of faces
      o Celebrity recognition
  • Facial Analysis
      o Identify emotion, age, and other demographics within individual faces
  • Object, Scene and Activity Detection
      o Return objects the algorithm has identified within specific frames, e.g. cars, hats, animals
      o Return location settings, e.g. kitchen, beach, mountain
      o Return activities from video frames, e.g. riding, cycling, swimming
  • Tracking
      o Track the movement/path of people within a video
  • Unsafe Content Detection
      o Auto-moderate inappropriate content, e.g. adult-only content
  • Text Detection
      o Recognise text within images
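
To give a flavour of how accessible these services are, here is a minimal sketch using one of the offerings listed above (AWS Rekognition) via the boto3 Python SDK. The image file name is a placeholder, and AWS credentials and region are assumed to be configured.

```python
import boto3

rekognition = boto3.client('rekognition')  # assumes AWS credentials/region are configured

with open('sample.jpg', 'rb') as f:        # placeholder image
    image_bytes = f.read()

# Object, scene and activity detection
labels = rekognition.detect_labels(Image={'Bytes': image_bytes}, MaxLabels=10, MinConfidence=70)
for label in labels['Labels']:
    print(f"{label['Name']}: {label['Confidence']:.1f}%")

# Facial analysis – emotion, age range and other attributes per detected face
faces = rekognition.detect_faces(Image={'Bytes': image_bytes}, Attributes=['ALL'])
for face in faces['FaceDetails']:
    dominant_emotion = max(face['Emotions'], key=lambda e: e['Confidence'])['Type']
    print(face['AgeRange'], face['Gender']['Value'], dominant_emotion)
```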

The business benefits

Thanks to cloud computing, this complex and resource-demanding functionality can be used with relative ease by businesses.  Instead of having to develop complex systems and processes to accomplish such tasks, a business can now leverage the intelligence and immense processing power of cloud products, freeing them up to focus on how best to apply the output.

In a nutshell, vendors offering video and image services essentially provide users with APIs that interact with the cloud hosts they maintain globally. All the user needs to do, therefore, is provide the input and manage the responses returned by the calls made through those APIs. The exposé team currently have the required skills and capability to ‘plug and play’ with these APIs, with many use cases already outlined.

Potential use cases

As capable as these functions already are, improvements are happening all the time.  While the potential scope is staggering, the following cases are based on currently available functionality. There are potentially many more – the sky really is the limit.

Cardless, pinless entry using facial recognition only

Here, a camera captures a person’s face and the image is passed to the facial recognition APIs.  The response can then be used to either open the entry or keep it shut. Not only does this improve security, preventing the use of someone else’s card or PIN, but if someone were to follow another person through the entry, security can be immediately alerted. Additional cameras can be placed throughout the secure location to ensure that only authorised people are within the specified area.
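
A minimal sketch of the decision logic, using AWS Rekognition’s face search against a pre-built collection as one possible implementation; the collection name and the open_door / alert_security functions are placeholders.

```python
import boto3

rekognition = boto3.client('rekognition')

def check_entry(image_bytes, open_door, alert_security, collection_id='authorised-staff'):
    """Match the captured face against the collection and act on the result."""
    result = rekognition.search_faces_by_image(
        CollectionId=collection_id,
        Image={'Bytes': image_bytes},
        FaceMatchThreshold=90,   # only accept high-confidence matches
        MaxFaces=1,
    )
    matches = result.get('FaceMatches', [])
    if matches:
        open_door(person_id=matches[0]['Face']['ExternalImageId'])
    else:
        alert_security(reason='Unrecognised face at secure entry')
```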

Our own test drive use case

As an extension of the cardless, pinless entry use case above, additional APIs can be used not only to determine if a person is authorised to enter a secure area, but also to check if they are wearing the correct safety equipment. The value this brings to various occupational health and safety functions is evident.

We have performed the following scenario ourselves, using a selection of APIs to provide the alert. The video above demonstrates a chef whom the API recognises using face detection.  Another API is then used to determine whether he is wearing the required headwear (a chef’s hat). As soon as the chef is seen in the kitchen not wearing the appropriate attire, an alert is sent to his manager to report the incident.

Technical jargon

To provide some understanding of how this scenario plays out architecturally, here is the conceptual architecture used in the solution showcased in the video above.

Architecture Pre-requisite:

  • Face Repository / Collection

Images of the faces of people in the organisation. The vendor’s solution maps facial features, e.g. the distance between eyes, and stores this information against a specific face. This is required by the succeeding video analytics, as it needs to be able to recognise a face from various angles, distances and scenes. Associated with the faces is other metadata such as name, date range for permission to be on site, and even extra information such as work hours.
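
As an illustration of this prerequisite step, here is a hedged sketch of building such a collection with AWS Rekognition; the collection ID, bucket and photo names are placeholders, and the other vendors offer equivalent concepts.

```python
import boto3

rekognition = boto3.client('rekognition')

# One-off: create the face collection for the organisation
rekognition.create_collection(CollectionId='kitchen-staff')

# Index each staff member's photo; Rekognition stores the facial feature vectors,
# and ExternalImageId carries our own identifier (name, staff number, etc.)
staff_photos = {'chef_john': 'photos/chef_john.jpg'}   # placeholder mapping
for person_id, key in staff_photos.items():
    rekognition.index_faces(
        CollectionId='kitchen-staff',
        Image={'S3Object': {'Bucket': 'my-face-repository', 'Name': key}},   # placeholder bucket
        ExternalImageId=person_id,
    )
```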

Architecture of the AI Process:

  • Video or image storage

Store the video to be processed in the vendor’s cloud storage location, so it is accessible to the APIs that will subsequently be used to analyse the video/image.

  • Face Detection and Recognition APIs

Run the video/images through the Face Detection and Recognition API to determine where a face is detected and whether a particular face is matched from the Face Repository / Collection.  This will return the timestamp and bounding box of the identified faces as output.
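
A sketch of this step using AWS Rekognition’s asynchronous video face search as one possible implementation; the bucket, file name and collection ID are placeholders, and the response fields shown are simplified.

```python
import time
import boto3

rekognition = boto3.client('rekognition')

# Start an asynchronous face search over the stored video (placeholder bucket/key)
job = rekognition.start_face_search(
    Video={'S3Object': {'Bucket': 'my-video-storage', 'Name': 'kitchen-cctv.mp4'}},
    CollectionId='kitchen-staff',
)

# Poll for completion (a production solution would use the SNS completion notification instead)
while True:
    result = rekognition.get_face_search(JobId=job['JobId'])
    if result['JobStatus'] != 'IN_PROGRESS':
        break
    time.sleep(5)

# Each entry gives a timestamp, the detected face's bounding box and any collection matches
detections = [
    (person['Timestamp'], person['Person']['Face']['BoundingBox'], person.get('FaceMatches', []))
    for person in result.get('Persons', [])
]
```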

  • Frame splitting

Use the face detection output and a third-party video library to extract the relevant frames from the video, to be sent to additional APIs for further analysis.  At each frame’s timestamp, create a subset of images from the detected faces’ bounding boxes – there could be one or more faces detected in a frame.  Each bounding box extract is expanded to encompass the face and the area above the head, ready for the next step.
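
One way to implement this step is with OpenCV as the third-party video library. The sketch below assumes Rekognition-style bounding boxes (expressed as ratios of the frame size); the padding factor is an illustrative choice.

```python
import cv2  # third-party video library (one possible choice)

def extract_face_crop(video_path, timestamp_ms, bounding_box, padding=0.3):
    """Grab the frame at a given timestamp and crop an expanded face bounding box."""
    cap = cv2.VideoCapture(video_path)
    cap.set(cv2.CAP_PROP_POS_MSEC, timestamp_ms)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        return None

    h, w = frame.shape[:2]
    # Bounding box ratios converted to pixel coordinates
    left = int(bounding_box['Left'] * w)
    top = int(bounding_box['Top'] * h)
    box_w = int(bounding_box['Width'] * w)
    box_h = int(bounding_box['Height'] * h)

    # Expand the box sideways and upwards so the area above the head is included
    x1 = max(0, left - int(box_w * padding))
    y1 = max(0, top - int(box_h * padding * 2))
    x2 = min(w, left + box_w + int(box_w * padding))
    y2 = min(h, top + box_h)

    ok, buffer = cv2.imencode('.jpg', frame[y1:y2, x1:x2])
    return buffer.tobytes() if ok else None
```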

  • Object Detection APIs

Run object detection over the extracted subset of images from the frame.  In our scenario we’re looking to detect whether the person is wearing the required kitchen attire (a chef’s hat) or not.  We can use this output, in combination with the person detected, to send an appropriate alert.
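
A sketch of the attire check, again using Rekognition’s label detection as one possible implementation; the label names to match against are illustrative and would need tuning against real output.

```python
import boto3

rekognition = boto3.client('rekognition')

REQUIRED_ATTIRE = {'Hat', 'Chef Hat'}   # illustrative label names

def is_wearing_attire(face_crop_bytes):
    """Run object detection over the face/head crop and look for the required attire."""
    response = rekognition.detect_labels(
        Image={'Bytes': face_crop_bytes},
        MaxLabels=20,
        MinConfidence=60,
    )
    detected = {label['Name'] for label in response['Labels']}
    return bool(REQUIRED_ATTIRE & detected)
```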

  • Messaging Service

Once it has been detected that a person is not wearing the appropriate attire within the kitchen, an alert can be triggered and sent to management or other persons via email, SMS or other mediums. In our video, the manager receives the alert via SMS on their phone.
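
As one possible alerting mechanism, here is a minimal sketch using AWS SNS to send the SMS; the phone number is a placeholder.

```python
import boto3

sns = boto3.client('sns')   # assumes SNS SMS messaging is enabled in your region

def alert_manager(person_id, phone_number='+61400000000'):   # placeholder number
    sns.publish(
        PhoneNumber=phone_number,
        Message=f'OH&S alert: {person_id} detected in the kitchen without the required attire.',
    )
```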

Below we have highlighted the components of the Architecture in a diagram:

Conclusion

These are just a couple of examples of how we can interact with such powerful functionality, all available in the cloud. It really does open the door to a plethora of different ways we can interact with videos and images and automate responses. Moreover, it’s an illustration of how we can analyse what is occurring in our data, extracted from a new medium – which adds an exciting new dynamic!

Video and image analytics opens up immense possibilities not only to further analyse but also to automate tasks within your organisation. Leveraging this capability, the exposé team can apply our experience to your organisation, enabling you to harness some of the most advanced cloud services being produced by the big vendors. As we mentioned earlier, this is a space that will only continue to evolve and improve, with more possibilities in the near future.

Do not hesitate to call us to see how we may be able to help.

 

Contributors to this solution and blog entry:

Jake Deed – https://www.linkedin.com/in/jakedeed/

Cameron Wells – https://www.linkedin.com/in/camerongwells/

Etienne Oosthuysen – https://www.linkedin.com/in/etienneo/

Chris Antonello – https://www.linkedin.com/in/christopher-antonello-51a0b592/

 

Our YouTube channel


We have a growing list of videos on our YouTube channel where you can find some selected case studies, test drives and solutions. Get an inside look at the world of Smart Analytics.

Topics include:  Advanced Analytics, Cognitive Intelligence, Artificial Intelligence, Augmented and Virtual Reality, IoT and Business Intelligence

Feel free to subscribe as we are constantly adding new videos.

Our YouTube channel

 

Chatbots – how the Azure bot framework is changing the AI game

What are Chatbots?

Communication underpins intelligence. And language underpins communication. But language is complex and must be understood through the prism of intent and understanding. For example:

Take the term “thong” – in Australian slang this means flip-flops, a meaning lost on anyone unfamiliar with the slang, as the word means underwear in most other countries.

This is where bots, specifically chatbots, come into play. They allow users to interact with computer systems through natural language, and they facilitate the learning and training of, among other things, language, intent and understanding through machine learning and cognitive APIs.

It is important for the chatbot to be able to leverage a trained understanding of human language so that it knows how to respond to the user’s request, and what to do next. And so, when “John” (whom you will meet below) interacts with the computer with the question “do you sell thongs?”, the computer understands what is meant within the correct context.

Sounds cool, but complicated? Things have become much easier

Five years ago, embarking on a project to build an intelligent chatbot would have been an exercise involving an array of specialists assisting in the interpretation of natural language processing.  It wasn’t something that was affordable for companies other than those in the Fortune 500.

How times have changed – with the development of natural language processing toolkits and bot-building frameworks such as wit.ai and api.ai, web application and lambda developers now have the means to create intelligent yet simple chatbots without needing a natural language processing specialist.

There are different options available to build a chatbot, but in this article we investigate the Microsoft bot framework and introduce our own EVA (the Exposé Virtual Agent) – a chatbot built within the Microsoft bot framework. But first, let’s have a quick look at why businesses should care (i.e. what the business benefits are).

Why should businesses care?

It’s mostly about your customer experience!

We have all dealt with customer call centres. The experience can be slow and painful. This is mainly due to the human staff member on the other side of the call having to deal with multiple CRM and other systems to find the appropriate answers and next actions.

Chatbots are different. Provided they can hold a conversation with the customer, they are not limited in the same way, as they can dig through huge amounts of information to pick out the best “nugget” for a customer. They can then troubleshoot and find a solution, or even recommend or initiate the next course of action.

Let’s look at how this can be achieved with the Microsoft Bot Framework.

What is the Microsoft bot framework?

The Microsoft bot framework is a platform for building, connecting, testing and deploying intelligent and powerful bots.  The framework provides a tool that brings all the Microsoft bot-related technologies together, easily and efficiently. The core foundation of this framework is the Azure Bot Service.

The Azure Bot Service manages the desired interaction points, natural language processing tools and data sources. This means that all interactions go through the bot service before they make use of any natural language or cognitive toolkits, and these interactions can also draw on information from a variety of data sources, for example an Azure SQL Database.

In figure 1, “John” interacts with the Bot Service via a channel (the medium used to communicate with the computer in natural language). Many readers will already have used Skype or Slack to interact with other humans; these same channels can now be used to interact with computers too.

Figure 1: Bot Interaction

John is essentially asking about thongs and their availability, and he ends up with all the information he needs to buy the product. The Bot framework interacts with the broader Cognitive Services APIs (in this example Language Understanding and Knowledge Base) and various external sources of information, whilst Machine Learning continually learns from the conversation.
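
To give a feel for what sits behind a bot like this, here is a minimal sketch using the Bot Framework SDK for Python (botbuilder). A production bot would call LUIS and a knowledge base inside on_message_activity; that logic is only hinted at in the comments.

```python
from botbuilder.core import ActivityHandler, MessageFactory, TurnContext

class ShopBot(ActivityHandler):
    """Minimal bot: every user message arrives here via the Azure Bot Service channel."""

    async def on_message_activity(self, turn_context: TurnContext):
        user_text = turn_context.activity.text

        # In a real bot, this is where LUIS (intent/entities) and a knowledge base
        # or product database would be queried, e.g. for "do you sell thongs?"
        reply = f"You asked: '{user_text}'. Let me check our product range for you."

        await turn_context.send_activity(MessageFactory.text(reply))
```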

Let’s look at a local government example:

A council ratepayer interacts with the council’s bot via the council website and asks for information on rubbish collection. At this point, the bot simply refers to a particular knowledge base, in addition to other sources of information such as the website, an intranet site or a database.  The bot’s response is at this stage informative. A response could, for example, be: “Rubbish is collected each week in Parkside on Friday mornings between 5:30am and 9am. General waste must go in the red bin and is collected each week. Recyclables in the yellow bin and garden waste in the green bin are alternated each week.”

The user realises he has no green bin and so asks the bot where he can obtain one.

The bot now uses the Language Understanding APIs and picks up “where can…be obtained” as the user’s intent, and “Bin” and “Green” as entities (these could easily also have been “Yellow Bin” or “Rates Bill”, etc.). This invokes an interaction with the council’s Asset application and an order of the required asset, and likely also any financials that go with it through the Billing system.

The question, therefore, leads to a booking, a delivery and a bill – all without having to visit or call the council office, and with no on-hold telephone waits.
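
To make the intent and entity step concrete, here is a hedged sketch of calling the LUIS v3 prediction REST endpoint. The endpoint, app ID and key are placeholders, and the intent and entity names in the comment are purely illustrative.

```python
import requests

LUIS_ENDPOINT = 'https://your-resource.cognitiveservices.azure.com'   # placeholder
APP_ID = 'your-luis-app-id'                                           # placeholder
PREDICTION_KEY = 'your-prediction-key'                                # placeholder

def get_intent(utterance):
    url = f'{LUIS_ENDPOINT}/luis/prediction/v3.0/apps/{APP_ID}/slots/production/predict'
    response = requests.get(
        url,
        params={'query': utterance, 'subscription-key': PREDICTION_KEY},
    )
    prediction = response.json()['prediction']
    return prediction['topIntent'], prediction['entities']

intent, entities = get_intent('Where can I get a green bin?')
# e.g. intent -> 'ObtainAsset', entities -> {'Asset': ['bin'], 'Colour': ['green']}  (illustrative)
```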

Who is our own Eva?

Eva – Exposé Virtual Assistant

It’s just been Christmas time, and Eva joined the festivities 😊

If you browse to the Exposé website, http://exposedata.com.au/, you will meet Eva when you select “Chat with us now”. Eva was initially (Eva version 1) built to act as an intermediary between the website visitor and our knowledge base of questions and answers.  She is a tool that allows visitors to ask a series of questions, to which she returns answers. She learns from the questions and the answers using machine learning in order to improve the accuracy of her responses. The net result is users spending less time searching for information on our website.

Eva version 2 was meant to solve our main pain point – what happens if the content on the web (or blog) site changes? With Eva version 1 we would have had to re-train Eva to align with new or altered content. So, in version 2 we allowed Eva to dynamically search our WordPress blog site (this is where most of the content changes occur) so as to better answer user questions with up-to-date information.
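
A minimal sketch of the kind of dynamic lookup Eva version 2 performs, assuming the blog exposes the standard WordPress REST API; the search handling shown is simplified.

```python
import requests

BLOG_API = 'http://exposedata.com.au/wp-json/wp/v2/posts'   # standard WordPress REST endpoint

def find_blog_posts(topic, limit=3):
    """Search published blog posts for a topic and return title/link pairs."""
    response = requests.get(BLOG_API, params={'search': topic, 'per_page': limit})
    response.raise_for_status()
    return [(post['title']['rendered'], post['link']) for post in response.json()]

for title, link in find_blog_posts('predictive analytics'):
    print(title, '->', link)
```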

And if the user’s question could not be answered, we log this to an analytics platform to give us insight into the questions visitors are asking.

Eva – Analytics

In addition, we trained a language model in Microsoft’s Language Understanding Intelligent Service (LUIS) and built functionality inside the Azure Bot Service to utilise content from the Exposé WordPress blog.

An example of an interaction with Eva can be seen below. As there are a few blog posts that include videos, Eva will identify these and advise the visitor if there is a video on the requested subject.

Eva – example interaction

Eva clearly found a video on predictive analytics on the blog site and so she returns a link to it. But she could not find anything on cats (we believe everyone loves cat videos 😊) and informs the visitor of this gap. She then presents the visitor with an option to contact us for more information.

Eva has learnt to understand the context of the topic in question. The answer is tailored depending on how the question is asked about “Predictive Analytics”. For example…

Eva – example chat

Go and try this for yourself, and try replacing “predictive analytics” with any of the topics below to get a relevant and contextual answer.

  • Advanced Analytics
  • Artificial Intelligence
  • Virtual Reality *
  • Augmented Reality *
  • Big Data *
  • Bot Framework
  • Business Intelligence
  • Cognitive Services *
  • Data Platform
  • Data Visualization *
  • Data Warehouse
  • Geospatial
  • IoT *
  • Machine Learning *
  • Predictive Analytics *

* Note that at the time of publishing this article we only have videos for these topics. A comprehensive list of videos can be found here

Eva is ever-evolving, and she will soon become better at answering chained follow-up questions too.

GOTCHA: Eva was developed whilst the Azure Bot Service was still in preview; bot names must now contain at least 4 characters, so a three-letter name like “Eva” would no longer be accepted.

Did this really help?

Often technology that looks appealing lacks a true business application.

But as you have seen from the example with Eva, we asked her about a video on a particular topic. Imagine instead using your intranet (e.g. SharePoint), data held in a database, or even an operating system as a source of information for Eva to interact with.

Authors: Chris Antonello (Data Analytics Consultant, Exposé) & Etienne Oosthuysen (Head of Technology and Solutions, Exposé)

Transforming the business into a data centric organisation through an Advanced Analytics and Big Data solution – our ACH Group case study


An Advanced Analytics and Big Data solution allows for the acquisition, aggregation and blending of large volumes of data, often derived from multiple disparate sources, incorporating IoT, smart devices and predictive analytics into the solution.
Our ACH Group case study shows how a clever data platform architecture and design facilitates transformation into a data-centric organisation, both in response to comprehensive regulatory changes and to leverage the opportunities presented by technology in order to create a better experience for customers and staff.

See the case study here: Exposé case study – ACH Group

See more about advanced analytics

Artificial Intelligence in Advanced Analytics Platforms


Artificial intelligence (AI) encompasses various technologies such as machine learning, natural language processing, deep learning, cognition, and machine reasoning. Usually, AI is described as a system, modelled on biological intelligence, designed to give computers the human-like abilities of hearing, seeing, thinking and reasoning. One of the newest technology applications in business, Computer Vision, is an AI field that deals with how computers can be made to gain a high-level understanding of images and videos. The sub-domains of Computer Vision include video tracking, object recognition, learning, motion estimation, image restoration, etc.

According to a survey conducted by Narrative Science, a significant proportion of businesses already use AI in some form or another, a figure set to increase to over 60% by the end of 2018.

Let’s look at a typical use case we are working on right now, after which we will compare the two exciting entrants into the area of Computer Vision.

Use case:

Marketing activities are centred around smart advertising on online platforms. The business wants to change the advertising to be based on a person’s demographics, such as race, gender and age, which can increase the benefits for the company placing the advertisement.

Related use cases (especially around emotion) are discussed in the short video blog here: https://exposedata.wordpress.com/2017/01/12/cognitive-intelligence-meets-advanced-analytics/

Two platforms compared:

The two emerging major services set to disrupt the Computer Vision market are Microsoft Cognitive Services and Amazon (AWS) Rekognition. These services aim to place AI capabilities such as Computer Vision in the hands of analytics developers and analysts by providing APIs/SDKs that can easily be integrated into applications by writing just a few lines of code. The added benefit is the integration with their larger cloud-based offerings, which gives businesses a quicker ROI, higher reliability, and lower cost.

Let’s have a look at Microsoft’s Cognitive Services and Amazon’s Rekognition across Object Identification, Text Recognition, Face Detection, Emotion (in depth) and Price.

Object Identification:

Amazon and Microsoft both provide APIs and SDKs to read, analyse and label various objects in images. Both services could identify and label the objects included in the uploaded image (with a calculated level of confidence, as shown). However, Microsoft can also analyse videos in real time, in addition to images. Figures 1 and 2 show the results of both platforms respectively.


Figure 1: Microsoft object identification results


Figure 2: Amazon object identification results

If you need to process videos, then Microsoft Cognitive Services provides the superior offering. It can also detect adult content and categorise images or videos. However, if you are using images only, both products step up to the plate very well.

Text Recognition:

Similar to object identification, we conducted a test to analyse images that include text. Unfortunately, Amazon doesn’t yet provide a full text recognition service. The Microsoft offering can find, analyse and return text in different languages. Figures 3 and 4 present the results from Microsoft and Amazon after analysing the text included in the uploaded images.


Figure 3: Microsoft Text Recognition Result


Figure 4: Amazon Text Recognition Results

If you need to analyse text within images, the Microsoft service is at present the only option.  Amazon only shows that the uploaded image has text whereas Microsoft shows the actual text (even from multiple languages).

Face Detection:

One of the main applications of Computer Vision in AI is face detection. This can be extended to finding human demographics such as gender, age, emotion, whether the person is wearing glasses, facial hair, ethnicity, etc. Figures 5 and 6 show our results.


Figure 5: Microsoft Face Detection

Figure 6: Amazon Face Detection

Both Microsoft and Amazon have the ability to find demographic information such as gender, age, whether the person is wearing glasses, has a beard, etc. Microsoft goes one step further, as faces can be grouped by visual similarity (for example, verifying that two given faces belong to the same person). In addition, Microsoft can process real-time videos of people.
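
For readers who want to reproduce a comparison like this, here is a hedged sketch that calls both services on the same image – AWS Rekognition via boto3 and the Azure Face API via REST. The endpoint, key and attribute list are placeholders and reflect the APIs as they were offered at the time of writing.

```python
import boto3
import requests

def amazon_face_attributes(image_bytes):
    rekognition = boto3.client('rekognition')
    response = rekognition.detect_faces(Image={'Bytes': image_bytes}, Attributes=['ALL'])
    return response['FaceDetails']          # age range, gender, beard, eyeglasses, smile, ...

def microsoft_face_attributes(image_bytes,
                              endpoint='https://your-region.api.cognitive.microsoft.com',  # placeholder
                              key='your-face-api-key'):                                    # placeholder
    response = requests.post(
        f'{endpoint}/face/v1.0/detect',
        params={'returnFaceAttributes': 'age,gender,glasses,facialHair,emotion'},
        headers={'Ocp-Apim-Subscription-Key': key,
                 'Content-Type': 'application/octet-stream'},
        data=image_bytes,
    )
    return response.json()                   # per-face age, gender, emotion scores, ...

with open('sample_face.jpg', 'rb') as f:     # placeholder image
    image = f.read()

print(amazon_face_attributes(image))
print(microsoft_face_attributes(image))
```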

Emotion in Depth:

Computer Vision analyses a person’s emotion by studying his/her face. It returns anger, sadness, contempt, disgust, fear, happiness, neutral and surprise percentages.


Figure 7: Microsoft Emotion in Depth

If a business requires the analysis of someone’s emotion, then Microsoft can analyse and measure each of the emotions listed above based on faces. Amazon only returns the percentage of detected smiles. Also, Microsoft can process both images and real-time videos.

Service Price:

This is not a quote, but a simple cost comparison as obtained from the respective Microsoft and Amazon pricing websites:

For Object Identification and Text Recognition, Amazon is priced at $1.00 per 1000 images, compared to Microsoft’s $1.50 per 1000 images.

For Demographic Recognition (e.g. gender, age, wearing glasses, etc.),  Amazon is priced at $1.00 per 1000 images. Microsoft has a free plan if the number of calls is less than 30,000 per month, and above that, prices vary from $1.50 to $0.65 based on the number of calls.  In addition, Emotion “in depth” has its own prices at $0.10 per 1000 calls.

Amazon Rekognition (all services): https://aws.amazon.com/rekognition/pricing/

Microsoft object and text identification: https://www.microsoft.com/cognitive-services/en-us/computer-vision-api

Microsoft face detection: https://www.microsoft.com/cognitive-services/en-us/face-api

Microsoft emotion in depth: https://www.microsoft.com/cognitive-services/en-us/emotion-api

Summary of services:

The following table provides a summary of Computer Vision services from Microsoft and Amazon (at the time of authoring this article).

Capability                                      Microsoft    Amazon
Object identification (images)                  Yes          Yes
Real-time video analysis                        Yes          No
Text recognition (returns the actual text)      Yes          No (detects presence of text only)
Face detection and demographics                 Yes          Yes
Face grouping / similarity verification         Yes          No
Emotion in depth                                Yes          Smile detection only

Conclusion:

Although Microsoft’s Computer Vision offering is in some areas more mature than Amazon’s, it must be noted that Amazon’s Computer Vision services are also much newer. We have seen a lot of investment by both vendors in this area, so expect Amazon to close the gaps in due course.  However, at the time of writing, Microsoft is certainly leading the pack in Computer Vision.  But watch this space.