What does the search engine of the future look like? Forget 10 blue links...
You can break the creative process down into three big chunks: research, creation and revision. What happens in each part depends largely on the kind of content you're creating, the platforms you're making it for, and many other factors.
Like every journalist, I spent a lot of time using search on the web to help with that research phase. I was quick off the mark with it, and I learned to adapt my queries to the kinds of phrases which delivered high-quality results in Google. My Google-fu was second to none.
But that was the key point: as with every nascent technology, I had to adapt to it rather than the other way around. Google was great compared to what came before it, but it was still a dumb computer which required human knowledge to make the most out of it.
And Google was dumb in another way too: apart from spelling mistakes, it didn't really help you refine what you were looking for based on the results you got. If you typed in “property law” you would get a mishmash of results for developers, office managers and homeowners. You would have to do another search, say for “property law homeowners”, to get an entirely different set of results that were better tailored to you.
Google got better at using other information it knows about you (your IP address, your Google profile) to refine what it shows you. But it didn't help you form the right query: it never asked you “hey, what aspects of property law are you interested in?” and gave you a list of more specific topics.
What's more, what it “knew” about you was pretty useless. You couldn't, at any point, tell it something which would really help it give you the kinds of results you wanted. You couldn't, for example, tell it "I'm a technology journalist with a lot of experience, and I favour sources which come from established sites which mostly cover tech. I also like to get results from people who work for the companies that the query is about, so make sure you show those to me too. Oh, and I'm in the UK, so take that into account."
Google isn't even that good now. Partly that's down to the web itself being a much worse source of information. But that feels like a huge cop-out from a company whose mission is to “organise the world’s information and make it universally accessible and useful”. It sounds like what it is: a shrug, an admission that the company's technology isn't good enough to find "the good stuff".
The search engine of the future should:
Be able to parse a natural language query and understand all its nuances. Remember how in the Knowledge Navigator video, our professor could ask just for “recent papers”?
Know not just the kind of information about you that's useful for the targeting of ads (yes Google, this is you) but also the nuances of who you are and be able to base its results on what you're likely to need.
Reply in natural language, including links to any sources it has used to give you answers.
If it's not sure about the kind of information you require, ask you for clarification: search should be a conversation.
For the past few weeks, I've been using Perplexity as my main search engine. And it comes about as close as is currently possible to that ideal search engine. If you create content of any kind, you should take a look at it.
Perplexity AI allows users to pose questions directly and receive concise, accurate answers backed up by a curated set of sources. It's an “answer engine” powered by large language models (including both OpenAI's GPT-4 and Anthropic's Claude 2). Behind the scenes, Perplexity AI runs the user's query through Bing, then feeds the retrieved information to the AI model to generate a response.
Basically, it uses an LLM to turn your question into a query for a conventional search engine, runs the search, finds answers and summarises what it's found in natural language, with links back to sources. But it also has a system it calls (confusingly) Copilot, which provides a more interactive and personalised search experience. It uses OpenAI's GPT-4 model to guide users through their search process with interactive inputs, leading to more accurate and comprehensive responses.
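To make that pipeline concrete, here is a minimal sketch of the retrieve-then-summarise pattern described above. Everything in it is an assumption for illustration: the function names are hypothetical, and the search and LLM calls are stubbed out where the real system would call Bing and GPT-4 or Claude.

```python
# Sketch of an "answer engine" loop: search the web, build a prompt
# from the results, ask an LLM to answer with citations.
# web_search() and llm_summarise() are stand-ins, not real APIs.

def web_search(query, max_results=3):
    # Stub for a real search API call (Perplexity uses Bing here).
    return [
        {"url": "https://example.com/a", "snippet": "A snippet about the query."},
        {"url": "https://example.com/b", "snippet": "Another relevant snippet."},
    ][:max_results]

def llm_summarise(prompt):
    # Stub for a call to a large language model (GPT-4 / Claude 2).
    return "A natural-language answer synthesised from the sources."

def answer_engine(user_query):
    results = web_search(user_query)
    # Number each source so the model can cite them inline.
    context = "\n".join(
        f"[{i + 1}] {r['url']}: {r['snippet']}"
        for i, r in enumerate(results)
    )
    prompt = (
        "Answer the question using only these sources, citing them by number.\n"
        f"{context}\n\nQuestion: {user_query}"
    )
    return {
        "answer": llm_summarise(prompt),
        "sources": [r["url"] for r in results],
    }

print(answer_engine("What is property law for homeowners?"))
```

The key design point is that the LLM never answers from memory alone: it is handed fresh search results and asked to ground its reply in them, which is what lets the engine return links alongside the answer.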
Copilot is particularly useful for researching complex topics. It can go back and forth on the specific information users need before curating answers with links to websites and Wolfram Alpha data. It also has a strong summarisation ability and can sift through large texts to find the right answers to a user's question.
This kind of back-and-forth is obviously costly (especially as Copilot queries use GPT-4 rather than the cheaper GPT-3.5). To manage demand and the cost of accessing the advanced GPT-4 model, Perplexity AI limits users to five Copilot queries every four hours, or 600 a day for paying “Pro” users.
If you're not using Perplexity for research, I would strongly recommend giving it a go. And if you work for Google, get on the phone to Larry and tell him your company might need to spend a lot of money to buy Perplexity.