In the digital strategy sessions I run we look at the operational aspects of digital marketing, but we also cast a wider eye over emerging technology and trends. Often these trends take many years to become relevant, and that is the case with machine-learning and its development into the Generative AI that has taken the world by storm this year.
My interest in machine-learning began following a conversation with a Google engineer in 2014. ‘We used to know what was in the algorithm but we don’t now’, the engineer commented to me about the software behind Google’s ubiquitous search product. ‘A couple of years ago, one of my colleagues would have written the code to define the ranking factors but now we tell the algorithm our preferred outcomes and it codes itself,’ he continued. My first reaction was to ask, ‘Have you not seen Terminator?’
However, with hindsight, I realised this was the first time I’d heard of a full-blown application of machine-learning in the digital marketing ecosystem, which the following year, in 2015, was launched formally as RankBrain – an update to the Gordian Knot that is the Google search algorithm. The idea of RankBrain was to enlist the power of machine-learning to better understand the intent behind individual searches – particularly ones the system hadn’t seen before, which make up approximately fifteen per cent of the 8.5 billion Google searches carried out each day.
So, yes, my first reaction to apparently intelligent technology was ‘Killer Robots!’ - jumping straight to Skynet from the film Terminator, the fictional artificial intelligence that becomes self-aware and decides pesky humans are just getting in the way. Fast-forward to 2022 and it seems...
...that many people have had the same reaction at first contact with technology that can appear to be 'alive'. However, as Sam Altman, the boss of OpenAI, the company behind ChatGPT, puts it, 'it's very important that we try to educate people that this is a tool and not a creature'.
Shoulders Of Giants - Not Robots
Subsequently, I began to delve into the world of machine-learning and realised it was actually a Moore’s-Law-powered development of an idea initiated in the 1950s by brainiacs such as Frank Rosenblatt and his Perceptron or, twenty years later, Karen Spärck Jones and her snappily-titled Term Frequency–Inverse Document Frequency (TF-IDF). So not ‘Killer Robots!’, but the long-term, shoulders-of-giants development of mathematical modelling that had slowly become more powerful over a sixty-ish-year period.
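As an aside, the idea behind TF-IDF is simple enough to sketch in a few lines: a term matters in a document if it is frequent there but rare across the collection. Here is a minimal, illustrative Python version (my own toy formulation, not Spärck Jones’s original retrieval maths):

```python
import math
from collections import Counter

def tf_idf(docs):
    """Weight each term: frequent in this document, rare across the collection.

    tf(t, d) = count of t in d / number of terms in d
    idf(t)   = log(N / number of documents containing t)
    """
    n = len(docs)
    df = Counter()                      # document frequency of each term
    for doc in docs:
        df.update(set(doc))
    weights = []
    for doc in docs:
        counts = Counter(doc)
        total = len(doc)
        weights.append({t: (c / total) * math.log(n / df[t])
                        for t, c in counts.items()})
    return weights

docs = [
    "the cat sat on the mat".split(),
    "the dog chased the cat".split(),
    "robots fold proteins".split(),
]
weights = tf_idf(docs)
# 'the' appears in two of the three documents, so it scores low;
# 'mat' appears only in the first, so it scores higher there.
```

Crude as it is, this is recognisably an ancestor of modern search ranking: distinctive words float to the top, filler words sink.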
My interest piqued, I then followed the progression of machine-learning in digital marketing, including the introduction in late 2018 of BERT, described at the time by Google as ‘the largest change to its search system since the company introduced RankBrain, almost five years ago, and probably one of the largest changes in search ever.’
Despite the cuddly-Muppet style name, BERT stands for ‘Bidirectional Encoder Representations from Transformers’. Hardly a name to set the pulse racing, but an early application - at scale - of the breakthrough Transformer technical architecture created by a Google Research team in 2017; a development which ‘cracked’ Natural Language Processing and accelerated innovation industry-wide. Not least at OpenAI, which used the breakthrough Transformer technique to build Large Language Models (LLMs) called Generative Pre-trained Transformers – aka GPTs; the third and fourth versions of which created a moment in late 2022 that led to Alphabet boss Sundar Pichai going ‘code red’. Presumably when he realised he had been outmanoeuvred by his own technology.
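For the curious, the core mechanism inside a Transformer is ‘scaled dot-product attention’: each position in a sequence builds its output as a weighted average of every position’s ‘value’ vector, with the weights derived from query–key similarity. A toy, pure-Python sketch of the idea (illustrative only – real systems use matrix libraries, multiple heads and learned projections):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q.K^T / sqrt(d)) . V.

    Each query row attends over all key rows; its output is the
    attention-weighted average of the value rows.
    """
    d = len(Q[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in K]
        w = softmax(scores)
        out.append([sum(wi * v[j] for wi, v in zip(w, V))
                    for j in range(len(V[0]))])
    return out

# A query pointing along the first axis attends mostly to the first key,
# so the output leans towards the first value vector.
result = attention([[1.0, 0.0]],
                   [[1.0, 0.0], [0.0, 1.0]],
                   [[1.0, 0.0], [0.0, 1.0]])
```

The ‘bidirectional’ in BERT simply means every word attends to every other word in the sentence, in both directions, which is what made it so good at understanding context.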
The Business Of AI
Indeed, the commercial story behind the development of AI is just as interesting as the technology itself. As described previously, to fully understand what’s happening in the vast global digital marketing ecosystem it’s important to consider the human, ego-driven politics of digital marketing and its Game of Thrones qualities. And that’s certainly true when it comes to Generative AI.
Initially, Google were happy to let legendary tech-head, Jeff Dean, and his mega-powered team loose on developing AI technologies which could then be included in Google’s RankBrain, BERT and smart display advertising techniques, as well as at DeepMind – the ‘other’ Google AI crew.
However, Google also open-sourced their AI developments, including Transformer technology – maybe to allow the research team to flaunt their tech-chops and enjoy a moment in the Silicon Valley sun. Then, in 2018, this open-source gift was picked up by Sam Altman et al at OpenAI, a company that had begun life as a ‘non-profit’, albeit with backing from the deepest pockets including Elon ‘Tesla’ Musk, Reid ‘LinkedIn’ Hoffman and Peter ‘Paypal’ Thiel.
OpenAI used this cutting-edge Transformer architecture to create its GPT models and then, a year later, shifted from ‘non-profit’ to ‘capped for-profit’. This allowed Microsoft boss, Satya Nadella, to snaffle up a chunky stake in a company built on the back of technology created by the smartest bods at his arch-rival Google. Truly, a Game of Thrones moment.
'Sorry Dave'
So that was then, but what about now? And, most importantly, what about the ‘Killer Robots!’? OpenAI made ChatGPT available to the public at the end of 2022 and this led to an almost immediate wave of fears about where it may lead. This world view has become known as the AI ‘doomster’ outlook, which appears to be based on concerns that Generative AI will, like HAL 9000 in Stanley Kubrick’s film 2001: A Space Odyssey, become conscious and decide that humans are getting in the way - sorry Dave. In current AI speak, it will ‘take off’.
Quite how this happens isn’t entirely clear, but the fact that the people who created the technology seem more concerned than anyone else probably doesn’t help. That said, some smart people - including Marc Andreessen - believe the AI innovators are being naïve and shooting themselves in the foot: they think legislators will let them self-govern, when in fact politicians may take the reins and bring the whole AI shebang to a grinding legislative dead-end - which, Andreessen points out, is what happened with the regulation of nuclear power in the USA.
The reality is no one knows how generative AI will change the world but we can point to a few indicators.
Firstly, the appearance of the world wide web was spoken about in similarly dystopian terms, with people finding it hard to imagine life without the Yellow Pages or Encyclopaedia Britannica, and being concerned about widespread criminal activity and a dearth of employment. Now, some thirty years after its launch, the web has of course changed the world - for good and bad. As I once heard Sir Tim Berners-Lee say of his own creation, ‘the web is merely a reflection of humanity. If your sink is blocked for a few weeks and you remove the plug to dig out the obstruction, the offending mangled mess of soap, fish bones, human hair and other horrible gunk in which bacteria has set up home, is exactly like the World Wide Web.’
Secondly, machine learning, albeit in the guise of neural networks rather than Transformers, has already produced massive breakthroughs that will accelerate productivity in important areas of science. For example, in 2020 Google-owned DeepMind launched AlphaFold - software that cracked a long-standing problem in biology: predicting how proteins fold into the 3D shapes that determine how they work in the human body. As DeepMind boss Demis Hassabis described it, ‘so the rule of thumb in experimental biology is that it takes one PhD student, their entire PhD to do one protein. With AlphaFold Two we were able to predict the 3D structure in a matter of seconds. And so over Christmas, we did the whole human proteome or every protein in the human body or 20,000 proteins’. Research that may help find cures for diseases such as Alzheimer’s. So not ‘Killer Robots!’ - more like ‘Doctor Robots!’.
Thirdly, there are plenty of possible positive outcomes that anyone who has been playing around with Generative AI could appreciate. Coders and software developers look set to become more productive as AI assistants, such as Microsoft’s Copilot, shortcut the time-consuming trawl through huge coding resources such as Stack Overflow or GitHub by finding what’s required on request. In education, children - and anyone else - will find they have an infinitely-patient personal tutor that they can ask to teach them anything at all; and if the explanations are too complicated, they can ask for them to be simplified or made more challenging. In medicine, researchers, scientists and doctors look set to have access to greatly-improved testing and scanning capability, as DeepMind have shown. Of course, as with all new developments, others will use the technology for nefarious purposes - but that’s probably just a reason for the good guys to jump in and get going.
And finally, but most importantly, at the end of 2001: A Space Odyssey, the spaceship commander, Dr David Bowman, defeats HAL… by turning it off.
Furthermore, the future of Generative AI is littered with barriers to development. For instance, many large online platforms, including Reddit and Wikipedia, are not happy that Large Language Models such as ChatGPT have been ‘trained’ on content from their sites - and may block access to their content for the training of future LLMs. New methodologies may appear, such as synthetic data, but there’s a lot of uncertainty about their effectiveness.
Artificial General Intelligence
Much of the fear around Generative AI is based on the assumption that it will become something else, i.e. Artificial General Intelligence. That instead of just sucking up the entire web and creating a handy chatbot or image-creation service, it will morph into an independent being that will immediately seek to destroy humankind. This, at best, seems like a stretch. In fact, Generative AI can be thought of as a new ‘layer’ added to the pile of online technology that has built up over time. First there was ARPANET, used by a handful of scientists in the late 1960s to send each other messages. Then, in the early eighties, Vint Cerf and Bob Kahn’s TCP/IP was added, creating a network of networks. Then Sir Tim Berners-Lee’s web was built on top, creating a platform for websites. Then the mobile web was developed, creating a platform for apps. And now a Generative AI layer has appeared.
Many smart folks, including Zuckerberg, believe that rather than there being one all-seeing, omniscient AI, Transformer architecture has created foundation models – which will be the next platform for the creation of lots of smaller, specialist AIs; if only because the foundation models are vast and very expensive to build and run. The largest version of Meta’s open-source LLaMA Large Language Model has 65 billion ‘parameters’ – the variables the algorithm uses – and was trained on 1.4 trillion tokens (roughly speaking, words or word fragments). But that’s considered small. GPT-4 measures its parameters in the trillions and Sam Altman, the OpenAI boss, claims it cost $100m to train.
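To get a feel for that scale, a back-of-envelope calculation (my own illustrative figures: 2 bytes per parameter for 16-bit weights, decimal gigabytes) shows roughly how much memory is needed just to hold a model’s weights:

```python
def weight_memory_gb(n_params: float, bytes_per_param: float = 2.0) -> float:
    """Rough memory (decimal GB) needed just to store model weights.

    2.0 bytes/param assumes 16-bit floats; 0.5 assumes 4-bit quantisation.
    """
    return n_params * bytes_per_param / 1e9

fp16_gb = weight_memory_gb(65e9)            # 65 billion params at 16 bits
four_bit_gb = weight_memory_gb(65e9, 0.5)   # the same model, 4-bit quantised
```

At 16 bits per weight, a 65-billion-parameter model needs around 130 GB just to sit in memory – far beyond any laptop – which helps explain why hobbyists shrink (‘quantise’) the weights before running models locally.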
However, using Zuckerberg’s open-sourced model, smart individuals are now running their own experiments with smaller language models, including versions that run on ordinary laptops and smartphones. As a result, a wave of innovation has already begun that promises to be so rich some refer to it as a Cambrian Explosion – the period, over five hundred million years ago, when life on earth took off. A powerful image, but one that is likely to be hyperbolic in the opposite direction to the Doomster disaster movie.
Deep-seated, understandable human fears about change and the role of technology have always been with us. Amazingly enough, more than one hundred years ago, E. M. Forster wrote a short story called ‘The Machine Stops’, a vision of mankind captured by a vast network of machines upon which people have become entirely reliant. Forster wrote of a kind of religion being established in which ‘The Machine’ is the object of worship. Over time, people forget that humans created The Machine and treat it as a mystical entity whose needs supersede their own. Until, one day, a young man called Kuno decides he’s had enough, ignores the prevailing view and goes off to explore the natural world - while The Machine withers away.
In conclusion, the vast, swirling networks of the internet and world wide web are a primordial soup of innovation, technological wizardry and vast capital input that from time to time settles into a steady state and then, apparently from nowhere, shapeshifts into a new form - seemingly overnight. The reality is that these changes are years in the making, and by paying attention to the underlying trends we can see them coming and - like Kuno - stay grounded in the real world.