Once again BIMA Scotland partnered with Cohaesus to deliver our signature breakfast briefing format of snappy, insightful and informal presentations from a formidable panel of speakers. The session was expertly moderated by Jonathan Seal, Strategy Director at Mando and founding member of the BIMA AI Think Tank. The topic, AI – Beyond the Chatbot, built on the discussions of the Cohaesus-led roundtable held in Edinburgh in August on a similar subject.
So what better way to start the presentations than with Helena McAleer, Innovation Manager, We Are Social, sharing their 3-step strategy for building a successful bot, using Domino’s Pizza’s “Dom” as the case study. Helena opened by observing that chatbots have received a lot of negativity from users and the media, but explained that this is largely due to the mass entry of chatbot suppliers to the market, keen to jump on the bandwagon but lacking the know-how to develop a good product. Hence there is a lot of stuff out there that is badly built and not fit for purpose. Helena then walked us through a few key strategies for building a good bot:
CREATE A CHARACTER – in addition to wanting to build a bot with personality (much planning went into getting that right), Helena emphasised that ultimately Dom was responsible for delivering food and taking money, so above all else the bot must be reliable and honest – a personality you can trust.
CONVERSATION – this was designed around the transactional process the client wants. Make it as quick and easy as possible, and make sure there are no barriers to the transaction happening.
DON’T LET THE INTERNET BEAT YOU – Helena explained that the team put huge effort into anticipating all the possible chat Dom might receive so that they could create all the appropriate responses. This led to Comedy Central picking up a conversation with Dom which got a 10/10 for comedy value.
It is clear the success of Dom (as with any bot) comes down to a lot of thought and planning up front. As Helena put it: “Chatbots have feelings: copy first, not code”.
Next up was Matt Meckes, CTO & Co-Founder, Cohaesus, who narrated moral tales and stories of innovation to make us all think about how we should use machine learning to save us from the robot delinquents. Matt took us on a quick historical journey: over a pretty short period of time we have gone from building expensive, time-consuming models to picking them up off the shelf, but huge challenges remain with data. Gathering and labelling vast amounts of data is a crucial part of the process, as is removing bias from the process, which can be challenging when bias is not always acknowledged. Matt used great examples to illustrate his point, and two stood out as contrasting cases:
- A cucumber farmer in Japan with no AI expertise but vast amounts of cucumber data was able to build a cucumber-grading robot using an off-the-shelf Google product
- A program used by US courts (COMPAS) was much more prone to mistakenly label black defendants as likely to reoffend – wrongly flagging them at almost twice the rate of white defendants (45% vs 24%)
As we know, machine learning tools are powerful, but with the wrong or incomplete data they are a recipe for disaster. As Matt pointed out, there is too much emphasis on machine learning and not enough on machine teaching. Avoid big models; instead, break the process into lots of small models for greater data accuracy and to avoid bias – constraining the search space gives more transparency. Focus on the science and art of teaching.
Finally, tell people why the data is right. We must listen to, and learn from, the domain experts – they won’t listen to the data unless they understand the logic and learning behind it. It is important to understand why.
Matthew Higgs, Data Scientist, The Data Lab, was next up and wanted to share his enthusiasm for using AI in the creative process. He opened by explaining, tongue firmly in cheek, the different definitions of AI and machine learning: use AI when raising funding and machine learning when trying to recruit data scientists.
For Matthew the most exciting thing in the future will come through human-machine co-creation. He shared some examples of machine learning and how it impacts the creative process:
IDEATION – new generative models based on current real data. Matthew cited three examples: artwork where you could not tell which was human-generated and which was AI; bike designs where options were generated through AI rather than physical builds; and creating images of people that look like celebrities but are not actually real people.
PROTOTYPING – taking ideas and converting them into something you can see, touch and play with. For example, taking sketches of a UI and automatically mapping them, through image-recognition machine learning, into working interfaces you can play with and pass on to users to test.
EVALUATION – feedback on those prototypes. How do we collect data on human preferences for better creative design? A good example of this in practice is Mallzee.
In conclusion Matthew points to three key takeaways:
- Creativity is a process with feedback loops
- Machine Learning is just a tool
- Machine learning can blur the lines between supplier, creator and consumer.
Matthew concluded that the future is exciting, and handed over to Vicky, who offered some words of caution alongside a strong assessment of the commercial value of using data intelligently in machine learning.
Vicky Brock drew upon her personal experiences and successful career in marketing and data analytics to emphasise her message – know the value of your data. She described herself as “a walking broadcaster of data”. She shared with us her own love of data from sleep patterns to personal website data and how that data can influence activity and decision making. Her message to brands and corporates – I may choose to interact with you intelligently if you interact with me intelligently.
And yet, despite all this personal data being captured about us, there is still a data black hole for marketeers to address. Our actions might be easily captured, but our motivations are not. You can start to fill the gaps of that black hole by looking at the rich context around the data – echoing Matt Meckes’ observations on the importance of detailed, properly labelled data. To make decisions about behaviours we must go deeper into the data and make sense of it where we can.
Vicky used the example of an online shopper who, on the face of it, is a great customer, spending plenty of cash throughout the year on clothes, accessories and health & beauty. But overlay that with data on customer returns and the cost of servicing that customer, and their net value to the business was -£7. Look at the data again, however, and you can see that if they only bought health & beauty products they would be a very profitable customer. Hence “small data models” provide valuable information on the true worth of the customer and how to drive their future behaviour. The big model does not provide the full analysis.
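Her point lends itself to a quick worked sketch. The snippet below is purely illustrative – the order figures, category names and costs are hypothetical, not from Vicky’s actual dataset – but it shows how a per-category “small model” view can reveal a profitable segment that the single overall number hides:

```python
# Illustrative sketch of per-category net customer value.
# All order figures, categories and costs below are hypothetical examples.

from collections import defaultdict

# Each order: (category, revenue, was_returned, servicing_cost)
orders = [
    ("clothes", 60.0, True, 8.0),
    ("clothes", 45.0, True, 8.0),
    ("accessories", 25.0, True, 6.0),
    ("health_beauty", 18.0, False, 3.0),
]

def net_value_by_category(orders):
    """Net value per category: revenue kept (i.e. not returned) minus servicing cost."""
    totals = defaultdict(float)
    for category, revenue, returned, cost in orders:
        kept = 0.0 if returned else revenue
        totals[category] += kept - cost
    return dict(totals)

by_category = net_value_by_category(orders)
overall = sum(by_category.values())

print(by_category)  # the "small models" view, per category
print(overall)      # the single "big model" number, which hides the split
```

With these made-up figures the overall net value comes out at -£7 – an apparently loss-making customer – while the breakdown shows the health & beauty category alone is firmly profitable, which is exactly the kind of insight the big model obscures.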
Vicky closed by lamenting that the stable door to our data was opened some time ago and the horse has well and truly bolted. That doesn’t mean we shouldn’t continue to do everything we can to protect our data, or that corporates shouldn’t use it appropriately.
Following that cautionary tale, the panel closed with Andrew Bruce, Digital Planner, Screenmedia, looking ahead to the next 5 years of conversational interfaces. What can we expect, and will the lessons of Vicky and Matt be heeded? Andrew opened with a quick overview of the current state of the nation:
- Estimated 20 million smart speakers sold
- We have seen 130% growth in the voice first market
- Amazon Echo has 15,000 skills available – but don’t get too excited: 150 are purely cheese-related
Andrew then quoted Golden Krishna of Google – “The best interface is no interface” – and shared some of the characteristics voice interfaces need to take on to improve:
BIOMETRICS – can deliver much better personalisation if the device can distinguish you from other people in the room, e.g. picking out one voice among many.
PAYMENTS – there is much scope for development here, as there is often a need to leave the platform in order to pay. It is assumed that this is something Google and Amazon will be focussing on.
STATE MAINTENANCE – currently devices have limited memory and don’t connect with each other, so you would need to restart a conversation when moving between them. Development under way would allow you, for example, to continue a conversation from the car into the house, as you would with another human.
PRESENCE DETECTION – “this is quite creepy”, but the value is that the device can sense who is in the room without anyone speaking, and can therefore take a sensitive approach to conversation. Similarly, the TV would revert to the type of programming relevant to those in the room, and advertising would adjust too.
MACHINE VISION – AI for deep recognition of the data in a photo, helpful for very advanced search and retrieval. “What was that bag I found yesterday?” “Can you find me alternatives to that bag?”
Some lively Q&A ensued, and then Jonathan thanked our fantastic panel of speakers, each bringing their own unique experience and perspective to the theme.
Thank you to the BIMA AI Think Tank and especially our event partner Cohaesus.
Our event partners Cohaesus are lovers of all things digital. Their wide range of services includes digital architecture, platform strategy, CX and UX, design and build, IoT, AI and e-commerce.