There’s a scene in the 2007 Paul Thomas Anderson movie “There Will Be Blood” where Daniel Day-Lewis’s character bellows at some scared prospectors working on an oil rig, “What are you looking so miserable about? There’s a whole ocean of oil under our feet! No one can get at it except for me!”
That’s perhaps one way to describe what’s going on right now: with the advent of artificial intelligence, we’re at the start of an oil boom of sorts.
But nobody knows what to do with the “oil” yet. Is generative AI a solution without a problem?
If you go by the numbers, it is.
In an August 2024 report from consulting colossus Deloitte, 67% of surveyed businesses said they are increasing investment in generative AI. That’s a great statistic, until you read further down the report: “7 out of 10 respondents also admitted their organization has moved 30% or fewer of their GenAI experiments into production.”
A huge part of this might also be that generative AI is fed by LLMs—Large Language Models—that are expensive to license and could potentially contain copyrighted material. Vast swaths of the existing internet have been scraped to build these massive models, and major lawsuits are emerging because of it: horror author Stephen King’s case against OpenAI, the maker of ChatGPT, being probably the highest-profile example yet.
Lawsuits like this are still relevant even if you didn’t write Cujo: because so much of the available internet has been scraped, your 2004 LiveJournal post about why the music of My Chemical Romance “really speaks to me” might be making a comeback, in a roundabout way, as a tiny part of someone’s autogenerated email reply.
That’s the thing: the problems that AI seems to be solving for end consumers—at this point, some two years after ChatGPT’s release—are quite small. Apple’s new Apple Intelligence can do neat things like look through your emails for flight times, finish that boring email you’ve been putting off writing, or even make reservations based on a picture you take of a neighborhood restaurant (or you could—hear us out, Sharks—walk into the restaurant and make one in person).
But while AI shines in certain situations—scanning massive piles of data and making sense of patterns and outliers that humans would miss at that scale—its value hasn’t shown up for consumers. Yet.
There’s an old adage that perhaps the fastest and most assured way to success during a gold rush is to be the one selling the pickaxes. Take Nvidia, which specializes in the high-grade computer chips needed to train and run AI models. At the time of writing, the company is worth $3.3T (that’s trillion). Just 5 years ago, it was valued at $144B. If you do the math—which we did, thanks to AI—that’s an astonishing increase of roughly 22x, or 2,191.67%.
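If you’d rather check that napkin math yourself than take a chatbot’s word for it, here’s a quick back-of-the-envelope sketch in Python, using the $144B and $3.3T figures above (the exact valuations shift daily, so treat the output as approximate):

    # Rough check of Nvidia's growth, using the valuations cited above.
    then_valuation = 144e9   # ~$144 billion, five years ago
    now_valuation = 3.3e12   # ~$3.3 trillion, at time of writing

    multiple = now_valuation / then_valuation
    percent_increase = (now_valuation - then_valuation) / then_valuation * 100

    print(f"{multiple:.1f}x the old valuation")   # ~22.9x
    print(f"{percent_increase:,.2f}% increase")   # ~2,191.67%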
Because of astonishing returns like this, there are now thousands of venture capitalists pumping money into anything with AI in it. OpenAI, the biggest (and loudest) AI company thanks to its ChatGPT product, is raising new funding at a $150B valuation…but the company can’t seem to hold on to its own employees as it switches from a non-profit to a (very) for-profit business model.
Perhaps the biggest wet blanket on the AI party is existing human intelligence counterprogramming everything the AI enthusiasts say. Take the oft-used “it’ll increase productivity!” line bandied about by Silicon Valley fanboys. While OpenAI CEO Sam Altman promises that AI will boost productivity so much that it could underwrite a form of Universal Basic Income (which he called ‘Universal Basic Compute’ — CATCHY, MY DUDE!), MIT economist Daron Acemoglu argues in a May 2024 working paper that AI won’t increase productivity by much at all: a paltry 0.5% at best, and roughly a 1% increase in American GDP, spread over the next decade.
And…about that whole “it’ll make business better” line, too: a study of Boston Consulting Group consultants found that “for business problem solving,” using the most advanced version of OpenAI’s ChatGPT “resulted in performance that was 23% lower than that of the control group.”
Also worth noting: AI massively increases energy and water use. To write a 100-word email, ChatGPT “consumes” on average about 4 bottles of water (long story short: the servers need cooling, cooling needs water, and the exact amount depends on proximity to data centers and even the time of day). This adds up: just one Microsoft data center in Arizona, used primarily for ChatGPT, consumes 56 million gallons of water a year.
It’s worth mentioning that in There Will Be Blood—spoiler alert for a 17-year-old movie here—the main character, Daniel Plainview, gets his wish of becoming incredibly wealthy by digging deep to find something valuable: in this case, oil. Yet he manages to completely estrange himself from everyone around him. It’s no small secret that AI makes people lonelier: in one study cited by the Harvard Business Review, people who used AI for work ended up with more insomnia and increased alcohol intake.
Watching certain people get obscenely wealthy off the backs of artists and engineers just might make these AI companies the Daniel Plainviews of tomorrow.