Algorithms are everywhere.
Chances are that you’ve already been on the receiving end of several algorithms since you’ve been online today.
Been on YouTube recently?
If so, YouTube’s algorithm has recommended videos that you’re likely to watch.
Bought something on Amazon?
Well, those handy buying recommendations on the homepage – they’re carefully selected by Amazon’s algorithm to maximize your chances of adding just one more thing to your cart.
From banking to social media, pretty much every element of our digital lives today is directed in some form by algorithms🤯
Wondering how these algorithms work? It all comes down to machine learning – training algorithms to tell right answers from wrong ones. Read on to learn how machines learn exactly what you want and what you don’t.
A History of Algorithms: Human vs Machine
In the early days of programming, algorithmic bots were quite literally ‘programmed’: a human wrote out simple, step-by-step instructions that other humans could read, explain and understand. This is the classic “if this, then that” scenario.
For a while, this method worked well, particularly when it came to solving simple problems.
However, as time went by, things got more complicated, and simple instructions were no longer enough to cut it in today’s highly complex digital world. When you’re dealing in the billions – photos on social media, videos on video-sharing platforms, search engine results, financial transactions – things become incredibly complex, incredibly quickly. Simple, human-coded algorithms were no longer up to the task: the bigger the data, the bigger the headache.
Today, algorithmic bots are able to take big problems (and big data) and in most instances, do a much better job than a human.
Here’s where things get a bit weird though.
You see, the more we use these bots (and the smarter they become), the less we actually know about exactly how they work.
How Machines Learn
You may have heard about algorithms being ‘proprietary’ to specific companies.
What this really means is that the companies don’t want to talk about how their specific bots actually work as this is perhaps their most valuable asset – kind of like the ‘secret ingredient’ in a leading soda that gives them their edge over the competition.
A good example of this in the digital world is Google’s search algorithm which is always evolving, but the fundamental workings of which are its most closely guarded trade secret.
So how do these machines work and more specifically, how do these bots learn?
Well, while we can’t know the specifics of every algorithm out there (and there are a lot), there are some essential elements which are universal when it comes to understanding how machines learn.
The Problem with Teaching Machines
A classic machine learning challenge involves image recognition – for example getting a bot to recognize the subject in an image and maybe sort it into piles separating cats from dogs.
This is such an easy task for a human that we might assume it would be the same for a bot – after all, the differences are obvious, right?
Not if you’re a machine that has no concept of dogs or cats.
More than this, it’s simply not possible to tell a machine, in its language, how to distinguish the two concepts. We could list out the differences in our language, but to a machine that has no concept of these things either, this would be no good.
Due to the immense complexity of our brains, we’re simply able to know the difference from an incredibly early age as our neurons make ever more connections to allow us to grasp the subtle conceptual differences between canine and feline.
So how can we get a machine to learn to carry out this task?
By building other machines of course!
Building and Teaching a Machine that Learns
Because it’s simply not possible for us to build a bot that ‘just knows’ the difference between cats and dogs ourselves, the answer to this problem comes from building other more simplified machines – ones that can build the bots and ones that can train them.
It’s due to the simpler and more specialised nature of these individual bots that human programmers can make them (in exactly the way that we used to make simple algorithms that didn’t need to deal with tons of data).
Of course, our end goal here is to recognise and sort billions of photos of cats and dogs – a task beyond the skill set of mere humans – so we need to build two simple machines that can work together to make a new machine that can get the job done.
This process involves building and teaching machines that, in turn, build and teach huge numbers of bots, through recurring rounds of improvement in which most candidates don’t make the cut and, eventually, one ‘star pupil’ gets the job done flawlessly.
On the one hand we need a bot that can ‘assemble’ our candidates for training.
The goal of this bot is simple – put together as many candidate bots as possible to be sent for training by the trainer bot.
There’s no intelligence to this process and a good way to think of this is that the assembler simply wires the bots randomly and sends them on their way to be trained.
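To make this concrete, here’s a minimal Python sketch of what an assembler bot might look like. The representation is purely an assumption for illustration: each candidate ‘bot’ is nothing more than a list of random weights that scores a (pretend) image’s features and blurts out a guess.

```python
import random

def assemble_candidate(n_features, rng=random):
    """Wire up one candidate bot: just a list of random weights.
    No intelligence here -- the assembler wires candidates randomly."""
    return [rng.uniform(-1, 1) for _ in range(n_features)]

def predict(bot, features):
    """The bot 'looks at' an image (a flat list of feature values)
    and guesses based on its weighted score."""
    score = sum(w * x for w, x in zip(bot, features))
    return "cat" if score > 0 else "dog"

# The assembler's whole job: churn out lots of random candidates
# and send them on their way to be trained.
candidates = [assemble_candidate(n_features=4) for _ in range(100)]
```

At this stage every candidate is guessing blindly; the interesting part is what the trainer does with them next.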
With the candidate bots now built and ready for training, the training bot is provided with a set of data which the human provides (in our case a load of cat images and a load of dog images) and a key for distinguishing which is which.
The trainer bot can’t actually teach the candidate bots the concepts we need, as, like them, it doesn’t know how to recognise a dog or a cat either.
However, armed with the data set and key that we provided beforehand, the trainer bot is going to be an excellent tester.
After this first round of ‘testing’, the candidate bots are split into those that did worse and those that did better.
The better candidates are replicated with changes made in different combinations and then sent back to be tested once again.
This process is repeated again and again with each recurrent test leading to a new pool of candidate bots which are increasingly successful in recognizing the difference between cats and dogs.
What this means is that two relatively simple machines (one for assembling, one for training), combined with many, many rounds of selective candidate tweaking and retesting, eventually produce a bot that knows how to separate cats from dogs.
To get to this stage, the benchmark for what’s considered a ‘pass grade’ rises over subsequent iterations: more and more candidates fail and are discarded, while the more successful bots are copied and tweaked, until eventually there’s a bot that rarely fails.
How it Works
While it seems like this combination of unintelligent components shouldn’t be able to learn to carry out a task, the fact is that over enough iterations, it eventually does.
A good way to think of this is a bit like the Infinite Monkey Theorem which says that given enough time, a monkey hitting random keys on a typewriter would eventually write the entire works of Shakespeare.
In the case of our learning candidate bot scenario (swapping cats and dogs for text), after each iteration is complete, the closest thing to a work of Shakespeare would be kept and all of the (more) nonsensical text would be thrown out.
This would be repeated again and again, keeping what works and making minor adjustments to the next batch of candidates each time, until eventually we have a perfectly formulated copy of Twelfth Night.
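That keep-the-best-and-tweak idea is simple enough to demonstrate directly. The sketch below (a variant of the classic ‘weasel program’) starts with pure monkey-typing, then repeatedly keeps the attempt closest to a target phrase and bases the next batch of attempts on it; the target text and mutation rate are arbitrary choices for illustration.

```python
import random

rng = random.Random(42)
TARGET = "TWELFTH NIGHT"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def score(text):
    """How Shakespearean is this? Count characters already correct."""
    return sum(a == b for a, b in zip(text, TARGET))

def mutate(text, rate=0.05):
    """Copy the text, occasionally mistyping a character."""
    return "".join(rng.choice(ALPHABET) if rng.random() < rate else c
                   for c in text)

# Start from random typing, then keep only the closest attempt from
# each batch and throw the (more) nonsensical text out.
best = "".join(rng.choice(ALPHABET) for _ in range(len(TARGET)))
generation = 0
while best != TARGET:
    generation += 1
    attempts = [mutate(best) for _ in range(100)]
    best = max(attempts + [best], key=score)
```

Unlike the truly random monkey, this process converges in a few hundred generations at most, because good partial answers are never thrown away.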
Going back to our original example: as for how the eventual bot that emerges from this intense training regimen is actually able to recognise and sort images from there on out, we still don’t actually know – it just does.
The major limitation to this kind of learning is that while the machine has now mastered the task of splitting images, its expertise is limited specifically to the type of task or question it’s been set – in our case, identifying and separating static images.
If this changes in a way that the bot doesn’t understand (for example a video of a cat), then it hits a roadblock and simply won’t know what to do with this.
Similarly, if the bot struggles to recognise that a photo of a convincing dog lookalike isn’t actually a dog, it will simply sort it into our ‘Dog’ pile of images. Cute as such lookalikes undeniably are, when it comes to training an intelligent machine, accuracy is everything.
It’s for exactly these kinds of situations that humans enter the mix once again to extend the training data to include the kinds of things that might trip up even the best animal image sorting bot.
You may or may not know it, but you’ve almost certainly helped with this – not only through the data you provide to organisations in your everyday online life, but also via those almost always irritating “I’m not a robot” tests that ask you to pick out, say, every image containing a traffic light.
It’s exactly this kind of human input that is being used to grow the training material, helping machines learn to deal with those trickier, nuanced differences that we humans have no problem distinguishing but that aren’t so straightforward for bots.
Of course what’s even more useful for training certain bots is allowing them to learn directly from human users as and when they’re going about their daily online lives.
This is the strategy adopted by many of the large online platforms, with the system ‘learning’ and continually attempting to match your likely requirements to maximise your time on the site or positive feedback.
The more positive the outcome (user interaction, engagement or satisfaction), the better the machine has done and the more this should be replicated and improved upon in further iterations as we saw earlier.
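As a very rough sketch of that feedback loop (the real platforms’ systems are vastly more sophisticated and not public), imagine a recommender that nudges an item’s score up on every positive interaction, down on every negative one, and serves whatever currently scores highest:

```python
from collections import defaultdict

class FeedbackRecommender:
    """Toy sketch: every watch nudges an item's score up, every
    skip nudges it down; we recommend the current top scorer."""
    def __init__(self):
        self.scores = defaultdict(float)

    def record(self, item, engaged):
        # Positive outcomes are reinforced; negative ones penalised.
        self.scores[item] += 1.0 if engaged else -0.5

    def recommend(self, catalogue):
        return max(catalogue, key=lambda item: self.scores[item])

rec = FeedbackRecommender()
for title, watched in [("cat video", True), ("news clip", False),
                       ("cat video", True), ("dog video", True)]:
    rec.record(title, watched)

top_pick = rec.recommend(["cat video", "news clip", "dog video"])
```

Here the user’s everyday behaviour is the training data: every watch or skip is another graded test, and the system iterates toward whatever keeps you engaged.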
If you’ve ever wondered how Netflix seems to know exactly what to serve you for an evening’s entertainment, now you know.
What This All Means
Probably the most important thing to take away from all of this is that, as machine learning and algorithm-based bots become increasingly everyday tools in our world, it’s incredibly important to ensure we know what we’re asking these bots to do.
After all, we don’t really know how they’re actually doing their thing once we get to a certain point.
It’s critical, therefore, that we make sure we know exactly what we’re asking the intelligent machines around us to do – through the kinds of training scenarios we set for them, as well as the ways we use different types of data to train them.