Search, and Such

I read the other day that YouTube is the second-largest search engine on the internet. It achieved that distinction back in 2014. This got me thinking about how searching for relevant content has changed and will continue to change as the content we are searching for increasingly takes on new and novel formats.

Search is simply the access point to a database of information. Search engines provide a public interface to connect users to relevant results in that database. The database is sorted and ranked by the search engine’s proprietary algorithms, which do their job without much thought or notice by the end user. I’ve simplified this relationship below.

Public interface ↔️ sorting algorithms ↔️ database

For a long time, search engines simply connected users to websites that contained specific text keywords. As digital content is increasingly created in non-text formats (images, videos, health data, geospatial data, etc.), how does the role of the search engine change? If search is simply a way to index and find relevant results from a database, how does the search engine change as that database increasingly includes unstructured data?

800-Pound Gorilla

Google has dominated traditional search for so long that it’s hard to remember a time when it wasn’t the leading player in that space (remember Dogpile?).

While the gritty details aren’t important, it’s good to know that Google indexes the internet by crawling from page to page, which it does by following hyperlinks from one website to another (Google gives a pretty good overview of this). Google then indexes the pages it finds and gives preference according to a bunch of factors.
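To make that concrete, here’s a toy sketch of that crawl-and-index loop: fetch a page, record which words appear on it, follow its links, repeat. It’s nothing like Google’s production pipeline – the libraries (requests, BeautifulSoup) and the breadth-first approach are just reasonable stand-ins.

```python
# A toy sketch of the crawl-and-index loop -- not Google's actual pipeline.
from collections import deque
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

def crawl(seed_url, max_pages=50):
    """Follow hyperlinks breadth-first and build a keyword -> pages index."""
    index = {}                      # word -> set of URLs containing it
    links = {}                      # URL -> set of URLs it points to
    queue, seen = deque([seed_url]), {seed_url}

    while queue and len(links) < max_pages:
        url = queue.popleft()
        try:
            html = requests.get(url, timeout=5).text
        except requests.RequestException:
            continue
        soup = BeautifulSoup(html, "html.parser")

        # Index the page's visible text by keyword.
        for word in soup.get_text().lower().split():
            index.setdefault(word, set()).add(url)

        # Follow the hyperlinks -- the backbone of the crawl.
        links[url] = set()
        for a in soup.find_all("a", href=True):
            target = urljoin(url, a["href"])
            links[url].add(target)
            if target not in seen:
                seen.add(target)
                queue.append(target)

    return index, links
```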

The takeaway is that Google’s success is based on its insight that the hyperlink (the blue underlined text on this page are hyperlinks) provides the backbone for an index of the internet. Hyperlinks provide relational context that keywords don’t. But what happens as people increasingly search native content that doesn’t contain hyperlinks?
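That relational context is what the original PageRank insight exploits: a page that lots of well-linked pages point to is probably more useful than one nobody links to. Here’s a stripped-down sketch of that iteration over the link graph from the crawl above (Google’s real ranking obviously blends in far more signals):

```python
# A stripped-down version of the PageRank idea: rank pages by the links
# pointing at them, weighted by the rank of the linking pages.
# (Dangling pages are ignored for simplicity.)
def pagerank(links, damping=0.85, iterations=20):
    """links: dict mapping each URL to the set of URLs it links to."""
    pages = set(links) | {t for targets in links.values() for t in targets}
    rank = {p: 1.0 / len(pages) for p in pages}

    for _ in range(iterations):
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for page, targets in links.items():
            if not targets:
                continue
            share = damping * rank[page] / len(targets)
            for target in targets:
                new_rank[target] += share
        rank = new_rank

    return rank
```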

The digital world is increasingly encroaching on the purely physical world. As the information we desire moves away from traditional fixed data structures, search engines like Google will need to synthesize and connect data across all aspects of humanity’s collective digital lives. As this change occurs, search will change in a number of key ways.

Images and Products 

Ryan Dawidjan of Clarifai posted the image below in his Medium post about images as the universal input:

If you haven’t heard, there is a lot of research and excitement around technology that allows machines to understand the objects and context within an image. The technology underlying these innovations is called machine learning, or ML (there are some good answers on Quora that explain ML). In the above example, Clarifai’s ML technology has identified the individual and collective themes in this digital picture. ML is actively being applied to a bunch of different problems not related to images, but I think imaging is one of the most exciting areas.
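To give a sense of what “pixels in, labels out” looks like in practice, here’s a generic sketch using an off-the-shelf pretrained classifier from torchvision. This isn’t Clarifai’s or Google’s actual system, and the filename is a placeholder – it just shows the basic shape of the task:

```python
# A generic sketch of image tagging with an off-the-shelf pretrained model
# (torchvision's ResNet-50): pixels in, ranked labels out.
import torch
from PIL import Image
from torchvision import models

weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()

img = Image.open("photo.jpg").convert("RGB")   # "photo.jpg" is a placeholder
batch = preprocess(img).unsqueeze(0)

with torch.no_grad():
    probs = model(batch).softmax(dim=1)[0]

top5 = probs.topk(5)
for score, idx in zip(top5.values.tolist(), top5.indices.tolist()):
    print(f"{weights.meta['categories'][idx]}: {score:.2f}")
```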

Images, as they appear on our screens, are simply collections of pixels. It is the human brain that takes the input from our eyes and determines the objects, and their context, within that collection of pixels. To a computer, an image is just data without any unifying meaning or themes. Each pixel is simply a group of numbers expressed in bits.
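Here’s what that raw data actually looks like when a computer opens an image (using Pillow and NumPy; the filename is again a placeholder):

```python
# What the computer actually "sees": a grid of numbers, nothing more.
import numpy as np
from PIL import Image

img = Image.open("photo.jpg").convert("RGB")   # placeholder filename
pixels = np.asarray(img)          # shape: (height, width, 3) for RGB

print(pixels.shape)               # e.g. (1080, 1920, 3)
print(pixels[0, 0])               # the top-left pixel, e.g. [142  87  60]
# Each of those three numbers is a red/green/blue intensity stored in 8 bits
# (0-255). Recovering "a person on a beach at sunset" from millions of such
# triplets is the job machine learning takes on.
```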

It’s not difficult to imagine that we can apply this technology to every single image available digitally. What will we learn when we can sort every image ever taken by location, time, and content? It’s hard to even begin to grasp what we’ll be able to learn once this technology reaches mass scale.

Google has already made incredible strides here – Google’s AI can identify the content of images with rapidly increasing accuracy. Google has a massive database of images that it can draw from and gains millions more every day (why do you think Google Photos offers free unlimited storage?). Google has also started using ML in its own live products, with remarkable success.

However, Google’s weakness here is that it does not own the camera interfaces people will increasingly use to drive purchase decisions.

Pinterest launched an interesting product the other day. Pinterest Lens is a tool within the Pinterest app that identifies the elements in a picture. Users can either point their camera at an object or scene or use an image saved on their phone. Pinterest analyzes the image and identifies the key components – both products and themes. Imagine walking through a mall and seeing an outfit you like – with Lens, you can simply point your camera at it and see similar styles from across the web.

[GIF: Pinterest Lens fashion search results]
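Under the hood, this kind of visual search usually comes down to embeddings: turn every image into a vector, then rank catalog items by how close their vectors sit to the query photo’s. A rough sketch of that nearest-neighbor step – the embedding model and the catalog here are invented stand-ins, not Pinterest’s actual system:

```python
# A rough sketch of "point your camera at an outfit, see similar items":
# embed every image as a vector, then rank catalog items by cosine
# similarity to the query photo. The embeddings below are made up; in
# practice they would come from a trained model (CNN, CLIP, etc.).
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def most_similar(query_vec, catalog, top_k=5):
    """catalog: list of (item_name, embedding_vector) pairs."""
    scored = [(name, cosine_similarity(query_vec, vec)) for name, vec in catalog]
    return sorted(scored, key=lambda x: x[1], reverse=True)[:top_k]

catalog = [
    ("red plaid shirt",   np.array([0.9, 0.1, 0.3, 0.0])),
    ("blue denim jacket", np.array([0.1, 0.8, 0.2, 0.4])),
    ("red flannel dress", np.array([0.8, 0.2, 0.4, 0.1])),
]
query = np.array([0.85, 0.15, 0.35, 0.05])   # embedding of the camera photo
print(most_similar(query, catalog, top_k=2))
```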

People will increasingly desire to connect spontaneous urges with the appropriate content across a wide range of services. Image search will continue to take hold as mobile services become more capable of handling on-the-fly machine learning analysis. Product search will move away from the fixed screen of your laptop or smartphone and into the physical world.

Currently, Amazon is the search platform for products. In fact, a majority of consumers now start their product searches on Amazon and not Google. [Side note – that research report also says that 16% of consumers start their product searches IN STORE, which seems bizarre to me]

[Image: Jeff Bezos laughing]

What will happen to companies like Amazon and Google that rely on the data and ad revenue stream from text-based product searches on desktop and mobile? These incumbent businesses will be threatened by new entrants who are able to connect spontaneous urges with the location, time, and object data from a user’s smartphone (or smart glasses) camera. Technology like ML combined with new interfaces built around the camera will create structured data around all objects in the physical world.

Abstract Queries and Our Digital Lives

Traditional search will always be best for querying specific keywords. But what about queries imbued with additional dimensions of information?

For instance, what about the query ‘news articles on snapchat in the last week from outside the US’? Google would interpret that as one string of text, despite the fact that I’ve inserted two operators that aren’t related to the specific content of my search. ‘Last week’ adds a time dimension and ‘outside the US’ indicates location. I admit that news is a tough example – aggregators like Apple News and Google News already provide ways to sort based upon these categories.
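One way to picture the difference: a structured version of that query carries the time and location operators as filters rather than as extra keywords. A toy illustration (the article records and field names are invented):

```python
# A toy illustration of a keyword string vs. a structured query.
# The article records below are invented.
from datetime import date, timedelta

articles = [
    {"title": "Snapchat eyes expansion", "published": date(2017, 2, 20), "country": "UK"},
    {"title": "Snapchat IPO preview", "published": date(2017, 2, 21), "country": "US"},
    {"title": "Snapchat usage in India", "published": date(2017, 1, 5), "country": "IN"},
]

# "news articles on snapchat in the last week from outside the US"
keyword = "snapchat"
last_week = date(2017, 2, 23) - timedelta(days=7)

results = [
    a for a in articles
    if keyword in a["title"].lower()     # the content keyword
    and a["published"] >= last_week      # 'last week'     -> time filter
    and a["country"] != "US"             # 'outside the US' -> location filter
]
print([a["title"] for a in results])
```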

Ben Evans brings up the possibility of using adjectives in our queries.

Tech like ML will provide new layers of context in existing databases, and users will increasingly begin to use natural language to search for content connected by more than just hyperlinks. After all, language is more than just the literal meaning of the words we use. This will be coupled with the increased use of voice as an interface to our devices à la Siri, Alexa, and Google Home. Companies that have access to a steady source of text and voice queries with which to train their ML models are well-positioned to stay ahead of this trend. But what about all the other places we process information across our digital lives?

I’ve started using a service called Atlas Recall, which aims to keep track of and index everything a user does digitally (the data is always encrypted, FYI). By doing this, Atlas aims to unite the information we share across all the disconnected silos of our digital activity. It has quickly become indispensable to me – Atlas takes on the burden of remembering where a piece of information lives so I don’t have to.
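To get a feel for what “indexing everything across silos” means, here’s a toy version: normalize activity records from different apps into one searchable list. The record format is invented – Atlas’s actual implementation is obviously far more involved:

```python
# A toy version of indexing activity across silos: normalize records from
# different apps into one list and search it by keyword. The record format
# is invented -- this is nothing like Atlas Recall's actual implementation.
from dataclasses import dataclass

@dataclass
class Activity:
    source: str      # e.g. "gmail", "slack", "browser"
    timestamp: str    # ISO-8601 string, kept simple for the sketch
    text: str         # whatever text the activity contained

activity_log = [
    Activity("gmail", "2017-02-20T09:14", "Q1 budget spreadsheet attached"),
    Activity("slack", "2017-02-21T16:02", "link to the budget deck from finance"),
    Activity("browser", "2017-02-22T11:47", "searched flights to Austin"),
]

def recall(query, log):
    """Return every activity, from any silo, whose text mentions the query."""
    q = query.lower()
    return [a for a in log if q in a.text.lower()]

for hit in recall("budget", activity_log):
    print(hit.source, hit.timestamp, hit.text)
```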

This is another one of Google’s weak points. The largest tech companies these days are platforms – Facebook, Amazon, Apple (iOS/Mac), and Salesforce are some examples. Google does not get to access the data shared on these platforms. There are two separate forces at work here: increasing numbers of users on these platforms and increasing services provided by these platforms. The combined effect is that the velocity and quantity of data produced across these platforms are increasing exponentially. These services can always sell user data to ad providers like Google, but I see an increasing opportunity for services like Atlas to provide a tool that indexes and manages all the information across these separate information silos.

Conclusion 

To summarize, I see three main forces influencing our search activity in the near to mid-term:

  1. Increased content generation in non-traditional formats across a range of new platforms
  2. Physical world and point-of-sale search driven by the camera
  3. Commoditized ML tech to handle abstract text and voice queries across all of our digital lives

Consumers will benefit greatly from being able to query information from the physical world and all corners of their digital lives. Companies will race to develop best-in-class ML services and will be limited by the data they have access to and interfaces they own.

If you like reading these posts, please be sure to subscribe to receive new posts by email. You can do that at the top of the page on desktop and at the bottom on mobile.

Notes

  1. YouTube as a Search Engine
  2. Google: How Search Works
  3. NYT: The Great AI Awakening
  4. Ryan Dawidjan: Images as the Universal Inputs
  5. Quora: What is Machine Learning?
  6. TechCrunch: Pinterest Lens
  7. TechCrunch: WTF is Computer Vision
  8. Bloomreach: State of Amazon
  9. Atlas Recall
  10. Benedict Evans: Cameras, ecommerce, and machine learning

Author: Ben

Numbers and words guy
