Neon: Conversational Virtual Humans

Realistic, realtime, and conversational digital humans — the workforce of the future.


The Challenge

  • Reimagine team and product for Neon, the GenAI subsidiary of Samsung.
  • Create a new culture of innovation to accelerate product evolution.
  • Evolve digital humans to achieve the necessary level of performance and realism.
  • Drive GTM and prove market acceptance.

Outcomes

  • Redeveloped the product, increasing TAM by ~42x.
  • Bootstrapped a vertical competency in conversational GenAI/LLMs.
  • Created a disciplined culture of ML, AI, and product while growing the team 6x.
  • Drastically improved model iteration time, inference efficiency, and visual realism.
  • Drove GTM from zero to release via intensive greenfield customer needs discovery.

The Story

In 2020, we spun off Neon as an independent subsidiary of Samsung Electronics to create a GenAI product that would lead Samsung into the AI-based future. In 2022, the Chairman’s office tapped me to take over the organization and retarget it toward the goals Samsung expected from the effort. This led to a business and cultural transformation focused on greater speed and efficiency, stronger alignment with market needs, and a world-class product that was positively received by some of the largest brands in the world.

Transformation Trifecta

In building Neon Assist, I approached Neon’s transformation from three perspectives: strategy, product, and technology.

Strategy

Neon initially focused on creating interactions for narrow use cases such as banking, but a thorough evaluation showed this scope to be too limiting. To expand the addressable market, I led a process of customer conversations and analysis of candidate applications, pivoting the product toward widely available, cost-conscious workforce replacement.

Product

As visual quality improved, it became clear that the product was incomplete: the underlying conversational AI, sourced from third parties, was slow and inadequate. As a result, we became a very early adopter of LLMs, addressing many of the problems that plagued the initial launch of ChatGPT. We created a product with end-to-end conversational capabilities customized to each client, and efficient internals that allowed the system to respond not just correctly but quickly and with a human cadence.

Technology

When I took over, the GenAI models powering Neon, like many contemporary machine learning models, were too heavy to operate at a price point the target applications could support. To reduce capital expenditure, we set a target of a 5x reduction in inference costs.

I created a disciplined ML, AI, and product iteration process that cut model architecture iteration time by 12x and reduced inference costs by more than 10x, built on a thorough exploration of how individual model components and architectural choices affected visual realism.

Outcomes

These changes allowed Neon to effectively target high-value verticals, leading to public customer-facing deployments with top global brands.

Imagery and video © Neon 2024.
