A recent report based on a survey of artificial intelligence (AI), machine learning (ML) and data practitioners found that 96 per cent of respondents agree that human expertise is a key component of their AI efforts.
The 2023 State of ML Ops report found a growing need for better data quality, along with human expertise and oversight, in delivering successful AI.
While automated data labeling has been on the rise, the report found that 96 per cent of respondents believe human labeling is important to the success of their ML/AI data models. In fact, 86 per cent call it essential and currently leverage human labeling at scale within their existing data labeling pipelines.
Data labeling is the act of adding meaningful labels to raw data to provide context for AI applications. For example, a label can indicate whether a photo or a video contains an elephant or a BMW 5 Series car. Hence, accuracy in labeling is critical for AI applications.
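To make the idea concrete, a labeled record is simply raw data paired with an assigned category. The following is a minimal sketch of what such records might look like in practice; the file names, label values and helper function are illustrative only and are not drawn from the report:

```python
# Minimal sketch of labeled image-classification records.
# File paths and label strings are illustrative only.
labeled_data = [
    {"image": "photos/safari_001.jpg", "label": "elephant"},
    {"image": "photos/street_042.jpg", "label": "BMW 5 Series"},
]

def label_counts(records):
    """Count how many records carry each label."""
    counts = {}
    for record in records:
        counts[record["label"]] = counts.get(record["label"], 0) + 1
    return counts

print(label_counts(labeled_data))
```

A human reviewer checking records like these, and correcting mislabeled ones, is the kind of intervention the report describes.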
Currently, 42 per cent of automated data labeling requires human intervention or correction.
Three in five AI/ML practitioners surveyed said higher-quality data was more important than higher volumes of data for achieving successful AI. A key related finding linked accurate and precise data labeling to realising return on investment (ROI), which in turn requires human intervention.
In a statement, Radha Basu, founder and CEO of iMerit, said, “Data must be more reliable and scalable for AI projects to be successful. Large language models and generative AI will become the foundation on which many applications will be built. Human expertise and oversight is a critical part of this foundation.”