Depending on who you believe, the last few weeks are either the beginning of an AI revolution or Armageddon. On the one hand, Bill Gates thinks the advances made are comparable to the introduction of the graphical interface for computers. On the other, a long list of people have signed a letter asking companies to slow down until the risks of artificial intelligence can be understood and mitigated.
One thing that everyone does agree on is that the past few weeks have shown an astonishing leap in technology. The impressive image generation of DALL-E 2 caught the public’s imagination, but it was the release of ChatGPT that sparked headlines. Suddenly it felt like we were interacting with something that, for the most part, understood us, and it made Siri’s monotone responses seem scripted and mechanical.
(As an aside, I’m really curious to see where Apple goes with this. They have a habit of waiting and reinventing things to be incredibly user-friendly, whether it’s a smartphone or an MP3 player, so it will be interesting to see what they do with AI.)
Tom Scott’s analysis of the situation summarised things perfectly. He describes any significant innovation as being on an S-shaped curve but highlights the problem that we don’t know precisely where on the curve we currently are. If we’re at the top, we’ll get some neat tools out of the new technology, but if we’re at the bottom, everything will change.
For the sceptics, the hype has become self-perpetuating. As the public pays attention, tech companies scramble to announce their own competitors. As more companies announce products, there is a greater sense of seismic change in technology. Matt Wolfe has some excellent summaries of the sheer number and pace of announcements over the last few weeks.
The question I’ve encountered most often has been, “what’s the point?” - where does all this technology lead, and what can we use it for?
Plenty of companies are trying to integrate AI into their offerings. For example, Notion has launched Notion AI - a way to generate documents from reasonably limited input. Although that certainly fits with the initial uses of ChatGPT, I can’t help but think of Tim Ferriss’s The 4-Hour Work Week.
I read The 4-Hour Work Week some months ago and wasn’t totally convinced. It’s extremely well-written, with interesting, practical ideas that have clearly been tested. Still, his recommendations around “outsourcing life” - using remote executive assistants for everyday tasks - were something I couldn’t quite reconcile.
I think this is where AI has immediate potential: bringing executive assistants to everyone. It goes far beyond the virtual assistants of Siri, Alexa and Google Assistant. For example, it might actively manage your calendar, or remind you of someone’s birthday and suggest suitable gifts (and then order them to the correct address at the right time). There are plenty of legitimate privacy concerns, so I expect a renewed focus on making models secure, perhaps by confining them to devices (Stanford’s now-withdrawn Alpaca demo, for example). Once those problems are surmounted, though, I suspect we’ll give them access to the same information as we would any executive assistant. That’s when we get to the really interesting question.
If AI enables that, what would people do with the time they get back?