Last week, I spent some time chatting with Tom Siebel, the billionaire entrepreneur who runs C3.ai, which sells big data analytics and A.I. software to some of the world’s largest businesses. It counts Royal Dutch Shell, Bank of America, 3M and the U.S. Air Force among its customers.
Siebel is known for being outspoken and, in our wide-ranging discussion about today’s state of A.I., he didn’t disappoint.
On pandemic data:
“No one here has any idea what they are talking about,” Siebel says of the U.S. response to Covid-19. And he includes many of the world’s leading epidemiologists and infectious disease researchers among “no one.” The problem, he says, is that the coronavirus response has become so politicized and polarized that it is impossible to find an objective analysis of the data.
Siebel enlisted C3.ai in creating the world’s largest Covid-19 “data lake”—essentially a giant compendium of datasets. The lake, hosted on AWS, contains 22 big datasets, including the Allen Institute for Artificial Intelligence’s set of papers that my colleague Jonathan Vanian highlighted in this newsletter a few weeks ago, as well as data from the World Health Organization, World Bank, U.S. Census Bureau and Johns Hopkins University.
Unlike some similar attempts at pooling coronavirus data, C3.ai created what is called a “unified, federated” dataset. Using automated tools it originally developed to handle its customers’ massive datasets, C3.ai spent about four weeks on the hard work of making all 22 datasets machine-readable, cleaning them up, and combining them into a single searchable knowledge graph, with queries pulling data from across the entire lake.
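To give a flavor of what that unification work involves, here is a minimal sketch in Python. It is not C3.ai’s actual pipeline, and the feed names, column names, and figures are all hypothetical; it only illustrates the general pattern of normalizing differently formatted sources to a shared schema and then joining them so one query spans the pool.

```python
# Illustrative sketch only -- NOT C3.ai's pipeline. Two hypothetical
# Covid-19 feeds with inconsistent schemas, as pooled datasets often have.
import pandas as pd

who = pd.DataFrame({
    "Country": ["US", "IT"],
    "Date": ["2020-05-01", "2020-05-01"],
    "NewCases": [30000, 1900],
})
jhu = pd.DataFrame({
    "country_code": ["US", "IT"],
    "report_date": ["2020-05-01", "2020-05-01"],
    "deaths": [1800, 250],
})

# Step 1: "clean up" -- normalize each source to one shared schema.
who_clean = who.rename(
    columns={"Country": "country", "Date": "date", "NewCases": "new_cases"}
)
jhu_clean = jhu.rename(
    columns={"country_code": "country", "report_date": "date"}
)
for df in (who_clean, jhu_clean):
    df["date"] = pd.to_datetime(df["date"])

# Step 2: "unify, federate" -- join on shared keys so a single query
# pulls fields that originated in different datasets.
unified = who_clean.merge(jhu_clean, on=["country", "date"], how="outer")
print(unified[unified["country"] == "US"])
```

At real scale the hard part is exactly step 1, done across 22 sources with automated tooling rather than hand-written renames, but the shape of the problem is the same.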
The data lake, which C3.ai is making freely available to researchers, is already being used by teams at MIT and the Federal Emergency Management Agency to analyze supplies of protective equipment and coronavirus tests in different regions, and by researchers at Arizona State University to look at media coverage of the pandemic and its effect on social psychology.
Siebel says he hopes it will bring some ground truth to discussions of the pandemic. “So the next time someone says something they will know what the hell they are talking about instead of listening to this, candidly, idle speculation,” he says.
On the coronavirus wreaking havoc on A.I. models:
Siebel says this might be true for a few systems—like those trying to predict purchasing behavior based on historical data—but he is convinced Covid-19 is massively accelerating digital transformation, including the adoption of machine learning. (Jonathan and I have previously discussed the issue in this newsletter.)
Take the oil majors, many of which are his customers. “They have two choices: They can reinvent themselves and figure out how to make money at $26 per barrel oil or go out of business, it is that simple.” He says C3.ai has seen its “pipeline of opportunities” quadruple compared to the same period last year, and he predicts it will triple again by next year.
At the same time, Siebel sees a big shakeout coming for A.I. startups—many of which, he says, are not real businesses. He predicts eight out of ten will go bust. “You clear the silliness out of the market,” he says. “All the Masa Sons are gone.” (Siebel is no fan of SoftBank CEO Masayoshi Son’s $100 billion Vision Fund, which he says was the “poster boy” for irrational excess in tech investing over the past five years. Vision Fund recently posted a loss of $18 billion in the year to March.)
On working with the Department of Defense and Big Oil:
Unlike some A.I. engineers and executives, Siebel has no issue working with Big Oil or the Pentagon. Many oil companies are trying to go green, he says, and machine learning can help them transition.
As for the military, he says it is the height of hypocrisy for Google engineers and executives, “many of whom are my friends,” to refuse to work with the Pentagon when they owe “their big homes in Atherton” to the Internet and GPS satellites, both of which he notes were developed with U.S. Department of Defense support.
“We are proud to work with the governments that stand for democracy and individual liberty and happy to work with energy companies,” he says, and he doesn’t mince words. “If someone is not OK with that, fine, go work with Google or Facebook, and consort with the Chinese and manipulate 13-year-old girls into being anorexic and suicidal.”
Siebel is known for his wariness over doing business in China, citing concerns over “state-sanctioned IP theft.” Google has made multiple attempts to gain footing in China, including through its controversial Project Dragonfly. (And yes, frequent Facebook use has been linked to eating disorders in young women—though the company is cracking down on related content.)
On A.I. ethics:
Siebel thinks it is hypocritical for companies like Google and Facebook to tout their adherence to A.I. ethics. These companies, he says, use machine learning to monetize people’s personal information and, through social media, feed addictive behavior and mental-health issues. He notes their platforms have been used for political disinformation and election meddling. What’s ethical about that?
And on government regulation:
“I think if governments do not get involved in regulating A.I., we will be very sorry, because we are going to have to live there,” he says. “I am not a big government guy at all. But there is a role for government here.”
And with that, here’s the rest of this week’s A.I. news.