The AI startup had no AI
Albert Saniger, the former CEO of the shopping app Nate, has been charged with defrauding investors in what amounts to a modern Mechanical Turk scam. He boasted that artificial intelligence (AI) could streamline the online shopping checkout process to a “single tap,” when the app actually relied on human workers in the Philippines, as alleged by the United States Department of Justice (DOJ).
Saniger, who raised over $50 million on the strength of the technology, claimed the platform had a 93% to 97% automation success rate, but its actual success rate was 0%, according to the DOJ.
The case represents a misguided effort to stay relevant in today’s evolving tech landscape. Startups slap “AI” onto their company’s name or marketing spiel to raise a few million dollars and join the cool club of tech savants.
But in reality, many are still putting the pieces together and have not finished building their product. Afraid of being left behind by the AI bandwagon, they exaggerate the data in their pitches and deliberately brag about automation capabilities that do not exist.
The ethical issue speaks for itself. Companies such as Theranos, FTX, WorldCom, and Enron employed similar unethical schemes. These scandals have damaged the reputations of major enterprises and set a bad precedent for the honest bunch.
Why Turk?
For the history nerds, the original Mechanical Turk was the OG “AI” robot: a chess-playing automaton created by Wolfgang von Kempelen in 1770. Designed to look like a robed figure seated at a chessboard, it amazed audiences by seemingly playing chess on its own. In reality, it was a clever illusion: a skilled human player was hidden inside the cabinet, controlling the machine’s moves.
The offshore and AI connection
The Nate fiasco has put outsourcing in a bad light. The reality is that offshore workers have been great partners in AI’s development. Amazon’s Just Walk Out, a shopping technology that eliminates cashiers and long queues, was trained by 1,000 outsourced workers in India. The same goes for OpenAI, whose models were trained with the help of 1,000 offshore Kenyan workers.
Elon Musk’s xAI has also hunted for remote workers, particularly those fluent in English and at least one other language, such as French, Chinese, Arabic, or Hindi, to train its language models.
Nascent AI needs human training to be fully functional and safe. It needs massive data sets, the right model and training method, and a rigorous evaluation process, and so far these tasks can only be accomplished by human workers.
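The steps above, human-labeled data feeding a model that is then evaluated against more human labels, can be sketched in miniature. Everything here is illustrative: the texts, the annotator labels, and the toy keyword “model” are invented for the example, and real systems learn from vastly larger datasets.

```python
from collections import Counter

# 1. Data: raw text is meaningless to a model until humans label it.
#    Here three hypothetical annotators label each example, and a
#    majority vote resolves their disagreements.
raw = [
    ("checkout was instant", ["pos", "pos", "pos"]),
    ("payment failed twice", ["neg", "neg", "pos"]),
    ("love the single tap flow", ["pos", "pos", "neg"]),
    ("app crashed at checkout", ["neg", "neg", "neg"]),
]

def majority(labels):
    """Pick the label most annotators agreed on."""
    return Counter(labels).most_common(1)[0][0]

dataset = [(text, majority(labels)) for text, labels in raw]

# 2. Training: a toy stand-in for a learned model that simply counts
#    which words co-occur with which human-assigned label.
def train(data):
    word_label = {}
    for text, label in data:
        for word in text.split():
            word_label.setdefault(word, Counter())[label] += 1
    return word_label

def predict(model, text):
    """Vote across the labels seen alongside each word."""
    votes = Counter()
    for word in text.split():
        votes.update(model.get(word, Counter()))
    return votes.most_common(1)[0][0] if votes else "pos"

model = train(dataset)

# 3. Evaluation: accuracy is measured against held-out human labels,
#    so people sit at both ends of the pipeline.
held_out = [("checkout crashed", "neg"), ("instant single tap", "pos")]
correct = sum(predict(model, text) == label for text, label in held_out)
accuracy = correct / len(held_out)
```

The point of the sketch is structural, not the classifier itself: at every stage, labeling, training data, and evaluation, the quality of the system traces back to human judgment.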
Demand for AI-related consulting services has skyrocketed in recent years, as institutions and industries seek to understand how the technology can help them. Consulting firms have rapidly expanded their portfolios across offshore markets in Asia and Africa.
Their affordability compared with Western counterparts is only part of the equation. These markets also offer a deep talent pool equal to the AI race’s taxing demands.
The Nate case is not representative of how an outsourcing partnership works. Outsourcing’s tenets are built on trust, honesty, integrity, and an uncompromising quest for excellence. The top corporations in the world work with offshore professionals to improve their market positions, not to fool their stakeholders.
The question for your business
How have offshore professionals helped in your AI journey?