
AR comes in many flavours. This is true of any technology in early stages as it twists around and takes shape. The most prevalent format so far is social lenses, as they enhance and adorn sharable media. Line-of-sight guidance in industrial settings is also proving valuable.

But a less-discussed AR modality is visual search. Led by Google Lens and Snap Scan, it lets users point their smartphone cameras (or future glasses) at real-world objects to identify them. It contextualises them with informational overlays… or “captions for the real world.”
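
To make that flow concrete, here is a minimal sketch of a visual-search loop: a camera frame goes to a recognition backend, which returns a label and context that the app renders as an overlay. The identify_object function is a hypothetical stand-in for a service such as Google Lens or Snap Scan, not their actual APIs.

```python
# Minimal visual-search sketch: frame in, "caption for the real world" out.
# identify_object is a hypothetical stub standing in for a real vision backend.
from dataclasses import dataclass

@dataclass
class Overlay:
    label: str         # what the object appears to be
    confidence: float  # recognition confidence, 0..1
    info_url: str      # contextual link (product page, recipe, map listing...)

def identify_object(frame: bytes) -> Overlay:
    """Hypothetical recognition call; a real app would send the camera frame
    to a vision model or visual-search API and get structured results back."""
    return Overlay(label="running shoe", confidence=0.92,
                   info_url="https://example.com/shop/running-shoe")

def caption_frame(frame: bytes) -> str:
    result = identify_object(frame)
    # The app would anchor this text to the recognised object as an AR overlay.
    return f"{result.label} ({result.confidence:.0%}) – {result.info_url}"

print(caption_frame(b"<camera frame bytes>"))
```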

This flips the script for AR in that it identifies unknown items rather than displaying known ones. That broadens the potential use cases – transcending pre-ordained experiences that have relatively narrow utility. Visual search has the extent of the physical world as its canvas.

This is the topic of a recent report that deep dives into the what, why, and who of visual search.

Naturally Monetisable

Many of the visual search use cases examined in the previous part of this series have one thing in common: shopping. The endgame is monetisable visual searches for shoppable items. This can be seen in use cases developing at Snap, such as local discovery and “outfit inspiration.”

The thinking is that visual search is naturally monetisable because of the lean-forward commercial intent inherent in its activation. Actively holding up one’s phone to identify real-world items flows naturally into transactional outcomes, making it a strong fit for brand marketing.

Amplifying these benefits is another big factor: Generation Z. They have a high affinity for the camera as a way to interface with the world. And this will only grow as Gen-Z gains purchasing power and phases into the adult consumer population. The oldest Gen-Zers are almost 30.

Lastly, all the above accelerated in the Covid era. This goes for general digital transformation and the rise of eCommerce during Covid lockdowns. But it also applies to the post-Covid era, when we’ll see lots of technologies that blend the physical and digital. This bodes well for visual search.

So if we synthesise visual search’s benefits, they include…

– Easy to comprehend
– Tangible utility (like web search)
– Broadly applicable (like web search)
– High-frequency (like web search)
– Broad appeal
– High-intent use case (monetisable)
– Gen Z-aligned
– Covid-accelerated
– Google-accelerated

By the Numbers

All the above advantages have been factored into ARtillery Intelligence’s market sizing of visual search revenue. But first, how will it be monetised? In short, it will work much like web search: companies will bid for placement in sponsored results that accompany organic results.

Of course, the dynamics will differ from web search in that results won’t carry the several ad slots found on search-engine results pages (SERPs); ad inventory will be limited. That scarcity, plus the high-intent use cases outlined above, means sponsorship will carry a premium.
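
As a rough illustration of why scarcity raises prices, the sketch below allocates a small number of sponsored slots by ranking bids and charging each winner just above the next-highest bid. This is a simplified generalised second-price model for illustration only, not any platform’s actual auction.

```python
# Illustrative slot auction: scarce inventory, ranked bids, runner-up pricing.
# A simplified generalised second-price sketch, not a real ad platform's rules.
def allocate_sponsored_slots(bids: dict[str, float], num_slots: int = 1):
    """Return (advertiser, price_paid) for each of the limited sponsored slots."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winners = []
    for i in range(min(num_slots, len(ranked))):
        advertiser, _bid = ranked[i]
        # Winner pays a penny above the next-highest bid (or a floor price).
        runner_up = ranked[i + 1][1] if i + 1 < len(ranked) else 0.10
        winners.append((advertiser, round(runner_up + 0.01, 2)))
    return winners

# With a single slot, competition for that scarce placement sets the price.
print(allocate_sponsored_slots({"BrandA": 1.50, "BrandB": 1.20, "BrandC": 0.80}))
# [('BrandA', 1.21)]
```

With fewer slots than bidders, the clearing price is set by the strongest competition rather than by abundant inventory, which is the premium dynamic described above.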

With all that in mind, ARtillery Intelligence projects in its recent mobile revenue forecast that visual search will grow from $166 million last year to $2.26 billion in 2026. Though it’s under-monetised today, it will grow to a leading share of mobile AR ad revenue by 2026.
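
For context on what that forecast implies, here is a back-of-envelope growth-rate calculation, assuming “last year” refers to 2022 and therefore a four-year horizon to 2026. The dollar figures come from the article; only the arithmetic is added.

```python
# Implied compound annual growth rate (CAGR) of the cited forecast.
start, end, years = 166e6, 2.26e9, 4   # $166M (assumed 2022) -> $2.26B (2026)
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.0%}")     # roughly 92% per year
```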

Why is it under-monetised? It’s not as established as other AR ad formats such as social lenses. Visual search players like Google are still refining the UX and building consumer traction before they flip the monetisation switch. Google Lens alone already sees 10 billion monthly searches.

Speaking of Google, it will dominate visual search just as it does web search. But there will be ample opportunity for more specialised players like Snap Scan and Pinterest Lens, which will focus on narrower, high-value use cases such as fashion and food discovery.

Credit: https://arinsider.co/2023/04/25/whats-the-business-case-for-visual-search/