Summary: We peek behind the scenes at Google’s latest demos at its I/O Developer Conference. By analyzing their latest technology breakthroughs, we can derive an educated guess at where the future of search will be, and apply that to digital marketing.
If you want to cut to the chase and find out what to do about digital content marketing immediately, it’s quite simple. The hit song Ringo Starr sang with the Beatles said it all: “Act Naturally.”
The big picture for Google search is that it wants to understand and respond adequately to natural, normal, everyday human conversation. Google doesn’t want users to have to type in specific queries with carefully chosen keywords. Google wants you to be able to ask it anything the way you’d ask any person standing next to you. It wants to be as human-like as possible. Google often states its Search mission as being “to organize the world’s information and make it universally accessible and useful.”
What this means for your content is that you don’t need to do anything special in order to rank highly. Just talk about the topic naturally, the way you would read about it in any magazine or newspaper article.
Even though we’re not quite there yet – true conversational AI is an ever-receding horizon – Google is homing in on that target year by year, and we should be adjusting our digital content marketing efforts to be more in line with Google’s intent. But first, let’s unpack our new Artificial Intelligence toys.
Introducing Two New Google Technologies
The 2021 Google I/O Developer Conference included updates to its mobile Android operating system, updates to Workspaces to integrate Docs and other desktop office tools, and several other sundry bullet points relating to its many products. But the big news was two innovations that will impact Search:
- LaMDA – “Language Model for Dialogue Applications” – To help bots better mimic human dialogue.
- MUM – “Multitask Unified Model” – Gives Search AI more ability to understand complex search queries and discover intent (what you *really* wanted even if you were struggling to ask for it).
Neither of these is implemented in an official Google update yet. The point of a developer conference is to showcase upcoming innovations and, especially, to get developers up to speed on new technologies and platforms.
On a side note, part of the MUM project’s scope is to develop “multimodal models” for understanding queries and responding to them, so that the perceived intent and the response change with the context of the medium, producing different interactions for text, audio, image, and video.
What Could This AI Model Look Like?
This stuff can get confusing for those who aren’t used to thinking in semantic, linguistic, and neurocognitive terms. Let’s stop here and try to define: What are we trying to do? The best example of an ideal level of human-computer interaction we can think of off the bat is the computer-human conversation from Stanley Kubrick’s 2001: A Space Odyssey. Bear with us, it’s geeky sci-fi, but there are important lessons here.
Think about how you’ve used Google and other technologies this past month, and then think about this interaction between the HAL 9000 and an astronaut in the movie. Notice the things that HAL does which our best computers still don’t do:
- Initiates the conversation with a greeting
- Reports “everything’s running smooth” without being prompted for a status report
- Inquires about the human’s artwork and asks to see the drawings
- Recognizes the sketches as portraits of other astronauts and compliments their rendering
- Asks politely to “ask a personal question”
- Expresses an almost emotional state of paranoia about the mission
Notwithstanding the eventual outcome of this fictional story, we can see where there is room for improvement in human-computer interaction, even given how far we have come. Moreover, the computer in the movie shows agency – it has its own drives, goals, ambitions, concerns, and desires. We may never see that part happen at all, since we’re content to have computers remain passive servants. But we want them to have enough agency to be good servants.
Better Interaction Applied To Google
Presently, we still have to think in terms of keywords when we make search queries. For example, say you wanted a way to animate simple shapes and images and capture them for video, with the intent of making a branded intro for your YouTube channel. You could go out and buy Adobe After Effects, but think like a work-from-homer on a budget here – surely there’s an inexpensive shortcut? Maybe we can just animate things in a web browser, the way we used to with the now-defunct Flash? Don’t we remember something about animating in XHTML / Canvas? So we type in:
- XHTML animation – This doesn’t seem to get us far, but we do see a new acronym…
- SMIL – We discover it means “Synchronized Multimedia Integration Language,” but we have to skip past other expansions of that acronym
- SMIL animation – At last we spot a developer page at Mozilla called “SVG animation with SMIL” – hey, that sounds close to what we intended!
- SVG SMIL – Finally we have a free, easy way: write a little bit of code, render it in a browser, capture it with Kazam, and we’ve got a video clip! Now compose a simple tune in Music Maker Jam, pair the two up in the OpenShot video editor, and we have a whole intro ready to tack onto the beginning of the videos on our channel.
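For the curious, the payoff at the end of that search journey is only a few lines of markup. Here’s a minimal sketch of the kind of SVG/SMIL file we mean – the size, color, and timing values are just placeholder choices:

```xml
<!-- A spinning square, animated with SMIL. Save as spin.svg,
     open it in a browser, and capture the window with a screen
     recorder such as Kazam. -->
<svg xmlns="http://www.w3.org/2000/svg" width="200" height="200">
  <rect x="75" y="75" width="50" height="50" fill="#4285f4">
    <!-- Rotate 0 to 360 degrees around the center point (100,100),
         taking 3 seconds per revolution, looping forever. -->
    <animateTransform attributeName="transform"
                      type="rotate"
                      from="0 100 100" to="360 100 100"
                      dur="3s" repeatCount="indefinite"/>
  </rect>
</svg>
```

No plugins, no subscriptions – the browser does all the rendering.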
You can see where we had to dodge several false leads and rephrase our query carefully. We know better than to say “free video” to Google; that will get us nowhere. “Video editing” isn’t specific to what we want to do – we can already draw SVG in Inkscape and record it with Kazam; we just want a shortcut for moving SVG graphics around. If we tried to “make animations for free,” we’d end up with proprietary apps buying ad placements to promise us free trials before paid subscriptions, and we’re not ready for that kind of commitment.
Imagine phrasing this whole search to a human: “So I can draw graphics in SVG, and I want to animate them for video clips for YouTube videos. Is there a free, open-source way I can do that? Sort of like Flash?”
That would be an example of a complex query. It ties together knowledge of some graphics technology, and a sharp focus on intent – if you Google Flash and SVG long enough, you’ll get bogged down in discussions about compatibility across web browsers, because normally these are discussed in the context of publishing a web page. But no, we want to open an SVG doc on our own computer, there’s our spinning logo, capture, close.
It’s far easier to derive intent from simple queries like “recipe for hummus.” But we all know the frustration of having to rephrase a query over and over because we don’t know the magic keywords that will give us the right answer. This pops up all the time in our day-to-day work, especially when we’re trying to discover something that may or may not even exist.
MUM, the Multitask Unified Model, will attempt to better understand these complex queries and arrive at what the user is trying to ask. It also aims to incorporate 75 different languages, hoping to remove international language barriers. It’s a lofty ideal that is still in the future, but it builds on BERT, which we mentioned before. “BERT” stands for “Bidirectional Encoder Representations from Transformers,” which was technical talk for “trying to understand natural language queries”; it was first introduced in 2018. BERT is one of many transformers, an AI architecture that aids in Natural Language Processing (NLP).
As for LaMDA, “Language Model for Dialogue Applications,” it is aimed more at chatbots, virtual assistants, and voice-activated systems like Google Assistant or Amazon’s Alexa. While it doesn’t impact text-based content queries, Google and other tech giants have been working on voice search for years. The difference is that, instead of typing in queries and clicking on links, the voice assistant would work more like a concierge, offering to book your appointments, make reservations, or purchase tickets when you ask it about relevant topics.
Future Conversational Search Applied To SEO
As we said up at the top, “act naturally” is the key takeaway for being ready for new, AI-conversing search trends. As long as you clearly outline your web page’s purpose and format your information with prudent use of headers, Schema markup, and UX design, your content will be ready when Google needs to lead somebody to it.
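The Schema part is less mysterious than it sounds. A hypothetical example for an article page might look like the snippet below, using JSON-LD (the structured-data format Google recommends); the headline, names, dates, and URL are all placeholders for illustration:

```html
<!-- Hypothetical Schema.org structured data for an article page.
     Placed in the page's <head>, it tells search engines plainly
     what the page is about, who wrote it, and when. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "How to Animate SVG Graphics for Free",
  "author": { "@type": "Person", "name": "Jane Example" },
  "datePublished": "2021-05-20",
  "mainEntityOfPage": "https://www.example.com/svg-animation"
}
</script>
```

The visible content of the page stays natural prose; the markup simply labels it for the machines.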
In fact, this is even better news for long-tail, niche SEO. One of Google’s stated objectives is to better serve the large volume of one-time queries from all of us unique snowflakes. While common queries are something Google can serve by rote every day, it still sees highly specific, one-time queries that require unique parsing and a best guess at the correct response.
There is no predicted timeframe for when LaMDA will be fully implemented, and, to be sure, some skeptics wonder whether we will ever get there. After all, Google’s answers are only as good as the information it finds. If that happens to be bad information, well, “garbage in, garbage out,” as they say. This 2017 video shows Google’s home assistant confidently giving “fake news” answers to several unlikely queries.
Our Director of SEO, John McAlpin, gave us his own summary:
“The key for marketers is to not optimize for current algorithm tech, but instead to optimize for where Google’s trying to go. Last year it was all about BERT and now it’s LaMDA and MUM. All of this tech is focused on improving search results for long-tail queries that Google hasn’t seen before.”
“Once our SEO fundamentals are set up on our clients, we need to focus on expanding our content towards the questions that aren’t showing up in our research tools. We need to focus more on the true user experience and ensure that the path to action is as streamlined as possible. Whether that’s improving site UX, or creating new content for mid-funnel and top-funnel searchers.”
Bottom line: Content marketing, which has already been focused on “write for people, not machines,” can expect to keep on doing more of the same. While keeping an eye on the same metrics and SEO objectives as before, we can afford to take a more human-focused tone with our content than ever before.