Apple’s Too Conservative Approach to Text Intelligence

Apple is, understandably, taking a conservative approach to artificial intelligence. Nowhere is this more obvious and product-crippling than its text intelligence features. I am a fan of using AI for an edit pass on my words. Specifically, I’ve come to rely on Grammarly and its ability to sniff out overused adverbs and other general sloppiness in my writing.

I’ve been around long enough to recall when grammar checkers first started appearing in applications like Microsoft Word. They were useless. It was comical how often their recommendations went against the grammar rules and made your writing worse. It wasn’t until the arrival of Grammarly that I got back on board with the idea of a grammar checker, and it’s been quite helpful. Note that I’m not using artificial intelligence to write for me; I’m using it to check my work and act as a first-pass editor. The problem I’ve always had with Grammarly is that it sends my words to the cloud whenever I want them checked.

Ideally, I’d like that done privately and locally. That’s why I was so excited about Apple Intelligence and text intelligence. It would presumably all happen on the device or Apple’s Private Cloud Compute servers. Unfortunately, at least in beta, Apple Intelligence isn’t up to the task. That conservative approach makes Apple’s Text Intelligence useless to me in this editor role. While Apple’s tools can identify obvious grammatical errors, they fall short in the more nuanced aspects of writing assistance.

A telling example: As a test, I recently gave Apple Intelligence a paragraph where the word “very” appeared in three consecutive sentences — a clear style issue that any modern writing tool would flag. However, Apple’s text intelligence didn’t notice this repetition. That’s very, very, very bad.

This limitation reflects a broader pattern in Apple’s approach to AI tools. While the foundation is solid, the current implementation may be too restrained to compete with existing writing assistance tools that offer more comprehensive feedback on style and readability. The challenge for Apple will be finding the sweet spot between maintaining their caution and delivering genuinely useful writing assistance. I get the big picture here. I know they’re not trying to make a Grammarly competitor, but they need to take several steps away from that conservative benchmark if this is going to be useful.

Another problem with the text tools is the implementation of recommended changes. You can have it either replace your text entirely (without any indicator of what exactly was changed) or give you a list of suggested edits, which you must implement manually. Other players in this space, like Grammarly, highlight recommended changes and make it easy to implement or ignore them with a button.

Apple is famous for its ability to create excellent user interfaces, and I suspect they could do something similar but probably better if they put their minds to it. Unfortunately, the current version of the text intelligence tools in Apple Intelligence isn’t even close.

Another Siri?

Mark Gurman’s got another AI/Siri report and it’s a doozy. According to the latest rumors, Apple is cooking up an LLM-powered Siri for iOS 19 and macOS 16.

The idea is that this would be yet another Siri reboot, but this time built on Apple’s own AI models. Think ChatGPT or Google’s Gemini but with that special Apple sauce (and privacy-focused access to your on-device data).

Here’s where I get a bit twitchy, though. Apple has been tight-lipped about the details of its AI strategy, and it’s starting to wear thin. If this massive LLM overhaul is really coming next year, what exactly are we getting with the current “Apple Intelligence” features that are supposed to land this year?

If, after all the WWDC and iPhone release hype, we get through all these betas only to find that Siri is still struggling with basic tasks, and then Apple says, “But wait until next year, we’ve got this whole new system that will finally fix everything!” Well, that will be just a little heartbreaking for me.

Perplexity Pages

My experiments with Perplexity continue. This alternative search app takes a different approach to getting answers from the Internet. Rather than giving you a list of links to read, it reads the Internet and tries to give you an answer, with footnotes linking back to its sources. I think it’s a good idea, and Perplexity was early to this game. Google is now following suit, to lesser effect, but I’m sure they’ll continue to work on it.

I recently got an email from Perplexity about a new feature called Perplexity Pages, where you can give it a prompt, and it will build a web page about a subject of interest to you. Just as an experiment, I had it create a page on woodworking hand planes. I fed it a few headings, and then it generated this page. The page uses the Perplexity method of giving you information with footnotes to the websites it’s reading. I fed it a few additional topics, and it generated more content. Then, I pressed “publish” with no further edits. The whole experiment took five minutes.

The speed at which these web pages can be created is both impressive and, in a way, unsettling. If we can generate web pages this quickly, it’s only a matter of time before we face significant challenges in distinguishing reliable information from the vast sea of content on the Internet. In any case, I invite you to explore my five-minute hand plane website.

Hope Springs Eternal for Apple Intelligence

Yesterday, Apple announced its new name for artificial intelligence tools on its platforms: Apple Intelligence. If you watched the keynote carefully, it was almost humorous how they danced around the term “artificial intelligence” throughout. At the beginning, Tim made reference to “intelligence” without the word “artificial.” Then, throughout the rest of the keynote, up until the announcement of Apple Intelligence, Apple relied on its old standby, “machine learning.” Nevertheless, they eventually got there.

Official Apple Intelligence text and iPhone image from Apple’s website after the June 10, 2024 announcement.

The initial explanation was telling. They stated five principles for Apple Intelligence: powerful, intuitive, integrated, personal, and private. These principles are the foundation of what they’re trying to ship. Also, in Apple fashion, the term Apple Intelligence doesn’t refer to a single product or service but to a group of intelligence-related features:

Table Stakes AI

This is the type of AI that everyone was expecting. It includes things like removing lampposts from picture backgrounds and cleaning up text. We already see multiple implementations across the Internet and in many apps on our Macs. Apple had to do this.

They did, and the implementation makes sense. It’s got a clean user interface and clear options. Moreover, developers can incorporate these tools into their apps with little or no work. It should be universal throughout the operating systems, so learning how the tool works in one place means you can use it everywhere else. For most consumers, this is golden.

Also, it will be private. While I’m a paying customer of Grammarly, I’m aware that everything it checks is going to their servers. That means there are some things that don’t get checked. I’d much prefer to do this work privately on my device.

LLM AI

There have been many rumors about Apple developing its own Large Language Model (LLM), but nobody expected it to be competitive with the likes of OpenAI and Google. So the question was: would Apple ship something inferior, work with one of the big players, or leave an LLM out of this initiative entirely? We got our answer with the partnership with OpenAI, which incorporates OpenAI’s GPT-4o model into the operating system.

This makes a lot of sense. Since the keynote, Craig Federighi has gone on record saying they also want to make similar partnerships with Google and other LLM providers. While nothing sent to a company like OpenAI is going to be private, Apple is doing what it can to help you out. It doesn’t require an account, and it gives you a warning before it sends data to them. Again, I think this is a rational implementation.

If you already have an OpenAI account, you can even hook it up in the operating system to take advantage of all those additional features.

Private AI

This was the most important component of Apple Intelligence, and it was underplayed in the keynote. Using the Neural Engine built into Apple silicon, Apple intends to give us the ability to take intelligence-based actions that can only be accomplished with knowledge of our data. That bit is essential: Apple Intelligence can see your data, but more powerful LLMs, like ChatGPT, cannot.

That gives Apple Intelligence powers that you won’t get from traditional LLMs. Craig explained it with some example requests:

“Move this note into my Inactive Projects folder,” requiring access to Apple Notes.
“Email this presentation to Zara,” requiring access to Keynote and Apple Mail.
“Play the podcast that my wife sent the other day,” requiring access to data in the Podcasts and Messages apps.

While these commands aren’t as sexy as asking an LLM engine to write your college paper for you, if they work, they’d be damn useful. This is exactly the kind of implementation I was hoping Apple would pursue. Because they control the whole stack and can do the work on device, this feature will also be unique to Apple customers.

“AI for the Rest of Us”

During the WWDC keynote, I only heard the term “artificial intelligence” once, at the end, when Craig said, “Apple Intelligence. This is AI for the rest of us.” I think that sentiment summarizes Apple’s entire approach, and I agree with the philosophy.

I’m convinced that Apple has considered AI in a way that makes sense to me and that I’d like to use it. The question now is whether Apple can deliver the goods. Apple Intelligence isn’t going to be released for beta testing until the fall, so now we just have promises.

Apple’s challenge is its history with Siri, which lingered so long without meaningful improvement. You’ll recall that Siri, too, started with a good philosophy and a lot of promises, but Apple didn’t keep up with it, and Siri never fulfilled its potential.

Looking at the Siri example, I should be skeptical of Apple Intelligence and its commitment. Yet, I’m more hopeful than that. The degree of intentionality described yesterday, combined with the extent to which Apple’s stock price is contingent on getting this right, makes me think this time will be different. In the meantime, we wait.