What do the ongoing AI Wars mean for the technology sector?
We look at how some of these AI tools work, why Google needs to respond to OpenAI and Microsoft, and what this means for the sector.
We are in the midst of an industrial revolution for Artificial Intelligence. The proliferation of AI services for consumers is something to behold. Google, in its recent announcement for Bard, references a study postulating that “the scale of the largest AI computations is doubling every six months, far outpacing Moore’s Law”. This level of progress is staggering and looks set only to accelerate.
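To put that doubling rate in perspective, here is a quick back-of-the-envelope comparison, assuming AI compute doubles every six months while Moore’s Law doubles roughly every two years:

```python
# Back-of-the-envelope growth comparison (assumptions: AI compute
# doubles every 6 months; Moore's Law doubles every 24 months).
def growth_factor(years: float, doubling_period_years: float) -> float:
    """Multiplicative growth after `years`, given a doubling period."""
    return 2 ** (years / doubling_period_years)

ai = growth_factor(4, 0.5)     # 2^8 = 256x over four years
moore = growth_factor(4, 2.0)  # 2^2 = 4x over the same four years
print(f"AI compute: {ai:.0f}x, Moore's Law: {moore:.0f}x")
```

Over just four years, that is a 256-fold increase versus a 4-fold one: the gap compounds dramatically.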
What are Large Language Models, and how do they work?
Large Language Models (LLMs) are models trained on billions of data points in a particular language that attempt to predict the next word or sentence. ChatGPT is an example of an LLM. ChatGPT was trained using human feedback: the model generates multiple responses to a prompt, those responses are shown to humans, and the humans select the response they like the most. Instead of using this feedback directly, it is used to train a second model, a reward model, which tries to predict which response a human would prefer; this is then used as the reward function to train the LLM. The reward model allows training to scale, because it imitates humans and thus does not require thousands of humans in the loop. The process is repeated: humans rank the new responses, that new information is used to update the reward model, and so on, until the LLM is sufficiently trained in the task that you require.
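The reward-model step can be sketched in a few lines. This is an illustration of the idea (a pairwise Bradley-Terry-style preference loss), not OpenAI’s actual code: the reward model scores two responses, and training pushes the human-preferred response’s score above the rejected one.

```python
import math

# Toy sketch of the reward-model objective used in human-feedback
# training (an illustration of the idea, not OpenAI's actual code).
def pairwise_loss(reward_chosen: float, reward_rejected: float) -> float:
    """-log(sigmoid(r_chosen - r_rejected)): small when the model
    already ranks the human-preferred response higher."""
    margin = reward_chosen - reward_rejected
    return -math.log(1 / (1 + math.exp(-margin)))

# If the reward model agrees with the human ranking, the loss is low:
print(pairwise_loss(2.0, 0.5))  # ~0.20
# If it disagrees, the loss is high, pushing the scores apart:
print(pairwise_loss(0.5, 2.0))  # ~1.70
```

Once trained, the reward model’s score stands in for a human judgement, which is what lets the feedback loop scale.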
There’s a flaw in this training system: the LLM is being trained on which responses humans prefer rather than on factual accuracy. ChatGPT is perhaps more likely to have a “stab” at something it doesn’t know than to admit it doesn’t know, as a confident answer will score more highly with the reward algorithm (and with humans). I’m sure you’ve seen examples where ChatGPT made up studies or got an obvious fact wrong, sometimes quite humorously; this is the flaw in action. Could a solution be to give the model real-time access to information, Wikipedia for example (I can hear teachers saying “never quote Wikipedia”)? Cleverer people than me are addressing issues like this.
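The “real-time access to information” idea is often called retrieval augmentation. A minimal sketch of it, assuming a toy in-memory knowledge base standing in for Wikipedia or a real search index, looks like this:

```python
# Minimal sketch of retrieval-augmented generation (an illustration of
# the idea, not any production system): fetch a relevant snippet and
# prepend it to the prompt, so the model can ground its answer in real
# text instead of guessing.

# Stand-in knowledge base; a real system would query Wikipedia, a
# search engine, or a vector database here.
KNOWLEDGE = {
    "moore's law": "Moore's Law observes that transistor counts "
                   "roughly double every two years.",
}

def retrieve(query: str) -> str:
    """Naive keyword lookup standing in for a real retriever."""
    for key, snippet in KNOWLEDGE.items():
        if key in query.lower():
            return snippet
    return ""

def build_prompt(question: str) -> str:
    """Prepend retrieved context to the question, if any was found."""
    context = retrieve(question)
    if context:
        return f"Context: {context}\n\nQuestion: {question}\nAnswer:"
    return f"Question: {question}\nAnswer:"

print(build_prompt("What is Moore's Law?"))
```

The model then answers from the supplied context rather than from whatever its training nudged it to say, which is one way to reduce confident fabrication.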
Why did Google need to respond to OpenAI?
Before Microsoft’s multi-billion-dollar investment in OpenAI (a partnership, not an acquisition), Bing (Microsoft’s search engine) represented just 9% of all global web searches. Now, OpenAI (and Microsoft by proxy) obviously has first-mover advantage in this market, and the investment set off alarms at Google. This is maybe the first time there has truly been a threat to Google’s main revenue source: search. The ability to drill down into a response and clarify exactly what you mean, instead of trawling through page upon page of Google results, could see the end of “search” as we know it. Microsoft has already announced that it is going to incorporate ChatGPT into Bing. It’s legitimate for Google to be concerned about OpenAI and other industry disruptors (I’d be surprised if the other industry megalodons haven’t been investing heavily in AI tools), as it risks losing its monopoly on search. Google went on to announce Bard, a direct competitor to ChatGPT. It has also recently invested nearly half a billion dollars in Anthropic, another competitor to ChatGPT (incidentally started by former employees of OpenAI).
Will Bard be better than ChatGPT?
Other than the questionable choice of name, we do not know what Bard will be like. In the announcement blog post, Google mentioned that it will initially be launching a smaller model, but gave no indication of that model’s size. It’s important to note here that a larger model isn’t necessarily a “better” one; as mentioned earlier, it could just mean the model is better at people-pleasing. Google has, however, been publishing immense progress in the Artificial Intelligence sector; MusicLM is a prime example. It seems that Google has made huge strides forward in this field but has not mastered commercialising it as OpenAI has. The competition will certainly drive innovation.
What does all this mean for the computer programming industry?
Google DeepMind’s mission is “Solving intelligence to advance science and benefit humanity”: in other words, solve intelligence, then solve everything else. I think in the immediate future (the next two years) these AI tools will be just that, tools to help Engineers build; beyond that, it’s difficult to see how advanced they’ll become and whether they will actually replace Software Engineers. Tools like ChatGPT and GitHub Copilot can already generate pretty robust code (although this again depends on the training data: fed buggy code, the models will generate their own buggy code). It’s not hard to imagine an AI generating a whole website or mobile app, complete with the Xcode build files and so on. I wonder how far away from that we actually are.
If you want to go deeper into this subject here are a few videos I recommend:
In fact, anything by Rob Miles on this subject
Google Panics Over ChatGPT [The AI Wars Have Begun] - ColdFusion