
Shawn Shen argues that for AI to thrive in the physical world, it needs the ability to remember what it sees. His startup, Memories.ai, is using Nvidia AI technologies to build the infrastructure that lets wearables and robots store and retrieve visual memories.
On Monday, at Nvidia's GTC conference, Memories.ai announced a partnership with the chipmaker. The deal lets Memories.ai use Nvidia's Cosmos-Reason 2, a reasoning vision language model, and Nvidia Metropolis, a tool for video search and summarization, to strengthen its visual memory capabilities.
Shen (shown above left) told TechCrunch that he and his co-founder and CTO, Ben Zhou (shown above right), came up with the idea for the company while building the AI behind Meta's Ray-Ban smart glasses. Working on the glasses led them to wonder how users could meaningfully interact with the technology in the real world if it couldn't remember the video it recorded.
They looked for existing solutions focused on that kind of visual memory for AI and, finding none, decided to leave Meta and build one themselves.
“AI is excelling in the digital arena. How about in the physical space?” Shen said. “AI wearables and robotics require memories too. Ultimately, visual memories for AI are essential. We envision that future.”
The ability of AI systems to remember is relatively new. OpenAI gave ChatGPT the ability to recall prior conversations in 2024 and refined that feature in 2025. Elon Musk's xAI and Google's Gemini have both introduced memory features over the last two years.
But Shen noted that these efforts focus mainly on text-based memory, which is more structured and easier to index but falls short for physical AI applications that engage with their environment primarily through vision.
Founded in 2024, Memories.ai has raised $16 million to date: an $8 million seed round in July 2025 plus an $8 million extension. The round was led by Susa Ventures, with participation from Seedcamp, Fusion Fund, and Crane Venture Partners.
Shen said building this visual memory layer required two things: the infrastructure to ingest and organize video into a format suited to storage and retrieval, and the data to train a model to do so.
The company released its large visual memory model (LVMM) in July 2025. Shen described it as a smaller counterpart to Gemini Embedding 2, a multimodal indexing and retrieval model released earlier this month.
To collect data, the company built LUCI, a device worn by its “data collectors” that captures video for training the model. Shen said the company doesn't intend to become a hardware maker or sell the devices; it built its own because existing high-definition video recorders drained too much battery.
The company has released the second version of the LVMM and has partnered with Qualcomm to run on Qualcomm chips starting later this year.
Memories.ai is already working with some major wearable companies, Shen said, though he declined to name them. Despite current demand, he expects even greater potential in wearables and robotics down the road.
“In terms of commercialization, we are concentrating more on the model and infrastructure, because we ultimately believe the wearables and robotics market will emerge, but it may not be immediate,” Shen said.

