There are several players working on tools to gauge emotions and user engagement. What sets you apart?
While emotion AI is a nascent and emerging market, one of its segments – conversational AI – has been around for some time, deployed across a variety of sectors and industries. What truly sets Lightbulb apart is that we are a visual-AI-first company, a genuinely unique skill set available with very few companies globally.
Our ability to also process multimodal inputs allows us to address conventional use cases while using visual AI as a strong differentiator from other companies' offerings.
Additionally, we believe that we are a horizontal platform that powers vertical solutions and therefore can be extremely flexible in our approach and our problem-solving abilities, allowing us to tap a wide variety of industries and segments for business.
This is distinct from current players, who are deeply verticalised and focus on a single use case and industry.
How has your product developed over time?
When the initial version of Lightbulb launched as an MVP in September 2021, we were primarily a visual-first company with strong competencies in visual AI. Over the last few months, as we have interacted with potential clients and moved from pilot engagements to paying customers, we have built out and refined our capabilities in conversational AI, and we now also provide speech transcription and sentiment analysis as a service.
Our multimodal emotion AI systems allow us to now mine any remote interaction in multiple ways to provide a comprehensive and detailed emotional and engagement map for enterprises to draw contextual insights from. This technology can therefore be deployed across both audio and video interactions in a wide variety of use cases and industries.
What industry do you feel would benefit the most by tapping into your technology?
While practically every industry can utilise emotion AI as a construct, given that all customers are – at the end of the day – human and responsive to emotion, some key industries and segments are driving the current traction for emotion AI companies such as online learning, consumer research, sales enablement, gaming and live entertainment.
As the world goes steadily online and we live parallel lives as digital avatars, the ability of metaverse systems to accurately detect and portray human emotion in a digital ecosystem will be a highly prized capability, and that is really where the ultimate play for emotion AI will lie.
Could you tell us about the technology powering your platform?
Lightbulb’s technology – with 4 patents in the pipeline – is based on unique ML models for computer vision, speech transcription, and audio-data mining. We operate only with explicit, permission-based opt-in models to collect video and audio data and analyse it in real time, providing businesses with real-time alerts on user emotions and engagement as well as post facto reports that map a user's emotion and engagement second by second during a remote interaction, whether live or asynchronous. We also provide contextual insights to remote-first enterprises to enhance critical business metrics.
Our systems are trained across massive datasets that span ages, gender, ethnicities, and geographies to deliver a genuinely global solution.
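To make the idea of a second-by-second multimodal mapping concrete, here is a minimal illustrative sketch of how per-second visual emotion scores and audio sentiment scores might be fused into a single engagement timeline. All names, scales, and weights here are assumptions for illustration only, not Lightbulb's actual models or implementation.

```python
# Hypothetical multimodal fusion step: combine per-second visual emotion
# and audio sentiment into one engagement value per second.
# Scales and weights are illustrative assumptions, not Lightbulb's.
from dataclasses import dataclass

@dataclass
class FrameScore:
    second: int             # timestamp within the interaction
    visual_emotion: float   # 0.0 (disengaged) .. 1.0 (highly engaged)
    audio_sentiment: float  # -1.0 (negative) .. 1.0 (positive)

def engagement_map(scores, visual_weight=0.6, audio_weight=0.4):
    """Fuse the two modalities into a per-second engagement timeline."""
    timeline = {}
    for s in scores:
        # Rescale audio sentiment from [-1, 1] to [0, 1] before weighting.
        audio = (s.audio_sentiment + 1) / 2
        timeline[s.second] = (visual_weight * s.visual_emotion
                              + audio_weight * audio)
    return timeline

scores = [FrameScore(0, 0.9, 0.5), FrameScore(1, 0.4, -0.2)]
print(engagement_map(scores))  # {0: 0.84, 1: 0.4}
```

A real system would of course derive these scores from trained vision and speech models rather than hand-set values, but the fusion-and-timeline shape of the output is the part the interview describes.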
What is the accuracy of your platform? How do you plan to improve it?
Lightbulb's visual AI technology is quite robust, with accuracy levels that compare favourably with top industry players. Despite its youth, Lightbulb.ai has scanned over 4 million faces across diverse races, regions, cultures, age groups, and genders, and has four patents in the pipeline across different segments, including online learning, market research, and sales enablement. For ease of consumption of our services, we offer not only SDKs/APIs for businesses to interact with but also a stand-alone web app that allows individuals and small businesses to harness the power of emotion AI.
The success of any machine-learning model depends heavily on the quality and quantity of the datasets on which it is trained. It is critical for machine-learning businesses to acquire data and train their models consistently, both to eliminate bias and to build balanced models that interpret data correctly.
To improve the accuracy of our systems, we constantly anonymise the data captured during actual user interactions and use it to refine our models. Because we train on real user data, this also creates a strong entry barrier for other businesses trying to catch up with our accuracy levels and gives us a significant head start.
How has your team grown since last year? What are your plans for hiring?
Over the last six months, we have grown from 7 to 14 people but continue to remain a small and deep tech-focused team that believes in strong gains per employee and in building deep and enduring IP across our business segments.
Additionally, we’re hiring extensively across verticals like product, consumer insights, sales and marketing, and technology to build a truly world-class team.