Don’t Underestimate Apple’s Shot at On-Device Medical AI

There’s a rumor that Apple is working on an on-device medical AI. The idea is that your iPhone or Apple Watch could use its onboard silicon to privately analyze your health data and offer recommendations, without sending that sensitive information to the cloud.

The general vibe I’m seeing in response to this rumor is skepticism. Plenty of folks think there’s no way Apple can pull this off, but I disagree: this is exactly the kind of thing Apple should be doing, and it’s a real opportunity for them.

Apple has been steadily building up its health tech for years. With features like atrial fibrillation (AFib) detection, ECG, and Fall Detection, they’ve proven they can deliver meaningful health tools. And they’ve done it with an eye toward user privacy and accessible design.

Now, imagine layering a personalized AI model on top of that foundation — something smart enough to notice patterns in your vitals, flag potential concerns, or even offer preventative guidance. And because Apple controls the hardware, they could run that AI model entirely on-device. That means your health data stays private, living only on your phone or watch, not bouncing around in the cloud.

Apple’s unique position here — owning both the hardware and the operating system — gives them access to a depth of personal health data that no off-the-shelf Large Language Model could ever touch. Combine that with their Neural Engine and you have a real opportunity to do something both powerful and private.
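To make that concrete, here’s a minimal Swift sketch of what the on-device plumbing could look like: pulling recent heart-rate samples from HealthKit and scanning them locally. The HealthKit calls are real API; the “analysis” step is a placeholder of my own invention, not any actual Apple feature.

```swift
import Foundation
import HealthKit

// Hypothetical sketch: read recent heart-rate samples on-device and
// scan them locally. The threshold "analysis" below is a placeholder;
// a real feature would hand the readings to a local Core ML model.
let store = HKHealthStore()
let heartRate = HKQuantityType(.heartRate)

store.requestAuthorization(toShare: [], read: [heartRate]) { granted, _ in
    guard granted else { return }
    let newestFirst = NSSortDescriptor(key: HKSampleSortIdentifierStartDate,
                                       ascending: false)
    let query = HKSampleQuery(sampleType: heartRate, predicate: nil,
                              limit: 100, sortDescriptors: [newestFirst]) { _, samples, _ in
        let bpm = HKUnit.count().unitDivided(by: .minute())
        let readings = (samples as? [HKQuantitySample])?
            .map { $0.quantity.doubleValue(for: bpm) } ?? []
        // Placeholder analysis: none of this data ever leaves the device.
        let elevated = readings.filter { $0 > 100 }.count
        print("\(elevated) of \(readings.count) recent readings above 100 BPM")
    }
    store.execute(query)
}
```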

This also feels like a moment for Apple to make a statement with “Apple Intelligence.” So far, Apple’s AI initiative has been underwhelming. This could be a way for them to reset expectations with something carefully designed, respectful of privacy, and genuinely useful.

Of course, this only works if they get it right. Rushing something half-baked out the door won’t cut it, especially when people’s health (and Apple’s AI reputation) is at stake. But if they take their time and nail the execution, this could be a defining moment for Apple’s AI efforts and one more key feature that saves lives.

I hope the rumor’s true and that Apple gives this the time and resources it deserves. It could be something special.

Apple Mail’s New Sorting Features

Apple’s latest operating system betas have finally brought the new Mail sorting and redesign features to iPad and Mac. While we’ve had time to experience these features on iPhone, their arrival on all platforms gives us a complete picture of Apple’s vision for email management.

Apple Mail window showing the new Apple Intelligence sorting feature with the Promotions label in red; no message is selected.

The response has been interesting. Power users generally aren’t impressed, arguing that web-based mail sorting tools and services like SaneBox offer far more sophisticated features. They’re right. However, I’ve noticed something different among casual users who have never experienced mail sorting before: they like Apple’s new email sorting.

I decided to experiment with this myself. I turned off all my fancy email sorting rules for my personal account and switched to Apple Mail’s new system. After some initial training, I’ve found it works surprisingly well. Sure, my MacSparky email still requires more advanced sorting that’s beyond what Apple offers, but for personal correspondence, this new system hits a sweet spot. Plus, there’s the added benefit of privacy.

This update represents a shift in Apple’s Mail development strategy. For years, they focused primarily on infrastructure improvements, making the app more stable and secure. It’s refreshing to see them adding new features again, even if they aren’t targeting power users. Not every feature needs to cater to the most demanding users; simplicity combined with privacy is probably exactly where Apple should be aiming.

Apple’s Too Conservative Approach to Text Intelligence

Apple is, understandably, taking a conservative approach to artificial intelligence. Nowhere is this more obvious, or more product-crippling, than in its text intelligence features. I am a fan of using AI for an edit pass on my words. Specifically, I’ve come to rely on Grammarly and its ability to sniff out overused adverbs and other general sloppiness in my writing.

I’ve been around long enough to recall when grammar checkers first started appearing in applications like Microsoft Word. They were useless. It was comical how often their recommendations went against the grammar rules and made your writing worse. It wasn’t until the arrival of Grammarly that I got back on board with the idea of a grammar checker, and it’s been quite helpful. Note that I’m not using artificial intelligence to write for me; I’m using it to check my work and act as a first-pass editor. The problem I’ve always had with Grammarly is that it sends my words to the cloud whenever I want them checked.

Ideally, I’d like that done privately and locally. That’s why I was so excited about Apple Intelligence and text intelligence. It would presumably all happen on the device or on Apple’s Private Cloud Compute servers. Unfortunately, at least in beta, Apple Intelligence isn’t up to the task. That conservative approach makes Apple’s text intelligence useless to me in this editor role. While Apple’s tools can identify obvious grammatical errors, they fall short in the more nuanced aspects of writing assistance.

A telling example: I recently gave Apple Intelligence a paragraph where the word “very” appeared in three consecutive sentences, a clear style issue that any modern writing tool would flag. Apple’s text intelligence didn’t notice the repetition. That’s very, very, very bad.
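For what it’s worth, this particular check is not hard. Here’s a toy Swift sketch of my own (the function name and threshold are illustrative, not anything Apple ships) that flags a word appearing in three consecutive sentences:

```swift
import Foundation

// Toy style check: flag any word that shows up in `runLength`
// consecutive sentences, the repetition test Apple's tools missed.
func overusedWords(in text: String, runLength: Int = 3) -> Set<String> {
    // Split into sentences, then reduce each sentence to a set of words.
    let sentences: [Set<String>] = text
        .components(separatedBy: CharacterSet(charactersIn: ".!?"))
        .map { sentence in
            let words = sentence.lowercased()
                .components(separatedBy: CharacterSet.alphanumerics.inverted)
                .filter { !$0.isEmpty }
            return Set(words)
        }
        .filter { !$0.isEmpty }

    guard sentences.count >= runLength else { return [] }
    var flagged = Set<String>()
    for start in 0...(sentences.count - runLength) {
        // Any word common to every sentence in the window gets flagged.
        let window = sentences[start..<(start + runLength)]
        let common = window.dropFirst().reduce(window.first!) { $0.intersection($1) }
        flagged.formUnion(common)
    }
    return flagged
}

// overusedWords(in: "This is very good. It was very nice. Very well done.")
// → {"very"}
```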

This limitation reflects a broader pattern in Apple’s approach to AI tools. While the foundation is solid, the current implementation may be too restrained to compete with existing writing assistance tools that offer more comprehensive feedback on style and readability. The challenge for Apple will be finding the sweet spot between maintaining their caution and delivering genuinely useful writing assistance. I get the big picture here. I know they’re not trying to make a Grammarly competitor, but they need to take several steps away from that conservative benchmark if this is going to be useful.

Another problem with the text tools is the implementation of recommended changes. You can have it either replace your text entirely (without any indicator of what exactly was changed) or give you a list of suggested edits, which you must implement manually. Other players in this space, like Grammarly, highlight recommended changes and make it easy to implement or ignore them with a button.
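The raw material for that kind of interface isn’t exotic, either. As a rough sketch, Swift’s standard library can already compute a word-level diff between your text and a suggestion, which is exactly what a highlight-and-accept UI needs (the sample strings here are mine):

```swift
// Rough sketch: a word-level diff between the original text and a
// suggested revision, the raw material for a highlight/accept UI.
let original = "The meeting was very very productive".split(separator: " ")
let suggested = "The meeting was productive".split(separator: " ")

for change in suggested.difference(from: original) {
    switch change {
    case .remove(let offset, let word, _):
        print("delete \"\(word)\" at word \(offset)")
    case .insert(let offset, let word, _):
        print("insert \"\(word)\" at word \(offset)")
    }
}
// Prints:
// delete "very" at word 3
// delete "very" at word 4
```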

Apple is famous for its ability to create excellent user interfaces, and I suspect they could build something similar, and probably better, if they put their minds to it. Unfortunately, the current version of the text intelligence tools in Apple Intelligence isn’t even close.

Another Siri?

Mark Gurman’s got another AI/Siri report, and it’s a doozy. According to the latest rumors, Apple is cooking up an LLM-powered Siri for iOS 19 and macOS 16.

The idea is that this would be yet another Siri reboot, but this time built on Apple’s own AI models. Think ChatGPT or Google’s Gemini but with that special Apple sauce (and privacy-focused access to your on-device data).

Here’s where I get a bit twitchy, though. Apple has been tight-lipped about the details of its AI strategy, and it’s starting to wear thin. If this massive LLM overhaul is really coming next year, what exactly are we getting with the current “Apple Intelligence” features that are supposed to land this year?

If, after all the WWDC and iPhone release hype, we get through these betas only to find Siri still struggling with basic tasks, and Apple’s answer is, “Wait until next year; we’ve got a whole new system that will finally fix everything,” well, that will be more than a little heartbreaking for me.

Apple’s Image Playground: Safety at the Cost of Utility?

As I’ve spent considerable time with Apple’s Image Playground in the recent iOS 18.2 beta, I’m left with more questions than answers about Apple’s approach to AI image generation. The most striking aspect is how deliberately unrealistic the output appears — every image unmistakably reads as AI-generated, which seems to be exactly what Apple intended.

The guardrails are everywhere. Apple has implemented strict boundaries around generating images of real people, and interestingly, even their own intellectual property is off-limits. When I attempted to generate an image of a Mac mini, the system politely declined.

Drawing a Mac mini is a no-go for Image Playground

This protective stance extends beyond the obvious restrictions: Try anything remotely offensive or controversial, and Image Playground simply won’t engage.

Apple’s cautious approach makes sense. Apple’s customers expect their products to be safe. Moreover, Apple is not aiming to revolutionize AI image generation; rather, they’re working to provide a safe, controlled creative tool for their users. These limitations, however, can significantly impact practical applications. My simple request to generate an image of a friend holding a Mac mini (a seemingly innocent use case) was rejected outright.

I hope Apple is aware of this trade-off and reconsiders some of these limits as Image Playground heads toward public launch. At least let it draw your own products, Apple.

An Automation Golden Age

Did you know I have a newsletter? I post some, but not all, of my newsletter’s content to this blog. Here’s a recent one.

I’ve mentioned several times on my podcasts that we’re experiencing a renaissance in automation, particularly on the Mac. This shift isn’t driven by a single tool but rather by the interoperability of a collection of tools.

AppleScript has been available on the Mac for decades, offering significant automation opportunities if you want to learn it. AppleScript allows users to connect applications and work with the same data to accomplish unified tasks. However, for many, learning AppleScript was a challenge. Programmers found it too different from traditional programming languages, and non-programmers struggled with its syntax. As a result, AppleScript adoption remained relatively small.

Apple, with Sal Soghoian leading the effort, introduced Automator in 2005 to address this, bringing drag-and-drop automation to the Mac. Meanwhile, tools like Keyboard Maestro and Hazel, developed outside of Apple, have been actively filling the gaps in Apple’s automation solutions for years.

Then came Shortcuts (née Workflow). Initially developed for iOS, Shortcuts is now firmly embedded in the Mac ecosystem. It’s a spiritual (if not direct) descendant of Automator, and in recent years, these tools have learned to work together. You can run Keyboard Maestro macros from Shortcuts, and Shortcuts can be triggered from within Hazel. Users can now mix and match these tools to create robust automation chains, combining the strengths of each.
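To give a flavor of how these pieces interlock, here’s a small Swift sketch that drives both tools from code, using the `shortcuts` command-line tool built into macOS and Keyboard Maestro’s AppleScript interface. The shortcut and macro names are placeholders:

```swift
import Foundation

// Minimal sketch: invoke a Shortcuts workflow and a Keyboard Maestro
// macro from code. Both names below are placeholders.
func run(_ tool: String, _ args: [String]) throws {
    let process = Process()
    process.executableURL = URL(fileURLWithPath: tool)
    process.arguments = args
    try process.run()
    process.waitUntilExit()
}

// Run a shortcut by name via the macOS `shortcuts` CLI.
try run("/usr/bin/shortcuts", ["run", "Process Inbox"])

// Trigger a Keyboard Maestro macro through its AppleScript interface.
try run("/usr/bin/osascript",
        ["-e", #"tell application "Keyboard Maestro Engine" to do script "File Screenshots""#])
```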

For those willing to invest the time to master—or at least gain a working knowledge of—these tools, few tasks on the Mac can’t be automated today.

The next big shift in this process is the integration of artificial intelligence. AI is already proving useful for generating automations, but if Apple Intelligence can fully tap into user data while still protecting privacy, and Apple integrates it with Shortcuts, we could see a new era of powerful, personalized automation. This leap could be as significant as the jump from AppleScript to Automator. Of course, this depends on Apple getting both Apple Intelligence and the integration right, but I suspect this is already on the big whiteboard in Cupertino.

Shortcuts and Apple Intelligence both use the Intents system to work their magic. Developers who build for Shortcuts benefit from Apple Intelligence and vice versa. With this common architecture, I believe Apple will eventually tighten the connections between Shortcuts and Apple Intelligence. It won’t happen overnight, but over the coming years, I expect this combination to become the next frontier of automation in the Apple ecosystem.
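For developers, that shared plumbing is the App Intents framework. A minimal (and purely illustrative) intent looks like the sketch below; declaring one makes the action available in Shortcuts today, and it’s the same surface Apple says Apple Intelligence will draw on:

```swift
import AppIntents

// Illustrative App Intent: the name, parameter, and behavior are
// invented for this example, but the framework calls are real.
struct SummarizeClipboardIntent: AppIntent {
    static var title: LocalizedStringResource = "Summarize Clipboard"

    @Parameter(title: "Maximum Sentences")
    var maxSentences: Int

    func perform() async throws -> some IntentResult & ReturnsValue<String> {
        // A real implementation would do the work here; this stub just
        // returns a placeholder so the intent is runnable end to end.
        return .result(value: "Summary limited to \(maxSentences) sentences.")
    }
}
```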

Apple Intelligence Summarization

Because I’m a little nuts, I’m running the Apple Intelligence betas on all my devices. In my opinion, Apple Intelligence still has a ways to go, and these are early days, but we’re starting to get a peek at what Apple’s been working on. The more powerful features haven’t entered beta yet, but this first tier of on-device Apple Intelligence already includes some features I’m genuinely enjoying.

One of them is the message summaries. In earlier betas, they applied only to the Messages and Apple Mail apps, but now you can extend them to third-party applications (and control this per app in the Notification settings). This means notifications on your phone are now summarized by Apple Intelligence on-device. With less than a week of testing, I already dig it.

The summaries are more concise and include more details than the previous notifications generated by the apps. There are no teasers here. It just tells me what I need. It’s AI-based, so it occasionally gets it wrong, but we’re still in early beta, and I will give it some grace. Also, this is the worst it will ever be, and it’s already pretty good.

The Big Question About Apple Intelligence

Since Apple unveiled its vision for artificial intelligence at WWDC, there have been numerous announcements and rumors about releasing different components of Apple Intelligence. The most recent update suggests that the all-new Siri won’t be available in beta until January of next year, with a general public release in the spring. While in recent years Apple has not hesitated to announce features that won’t ship with the initial release of a new operating system, Apple Intelligence takes this to a new level.

I don’t find fault with the delay, though. It seems Apple would have preferred to wait another year on these features anyway, and if the delay is what it takes to get them right, they’re worth waiting for.

The big question about all the Apple Intelligence elements is not “When do we get them?” but “Do they work?” Significant market pressures forced Apple to explain its AI position, and now it has. I generally agree with the thinking behind Apple Intelligence, and if it works as promised, it will be very impressive.

In a few years, people will not remember exactly when the various Apple Intelligence components were released. However, they will remember whether or not they worked.

Beta 2 Updates and iPhone Mirroring

Yesterday, Apple released the second beta of the new operating systems, as announced at WWDC.

There are still no signs of Apple Intelligence. The most interesting addition is the ability to mirror your iPhone to your Mac via Continuity. I’ve just started testing it, and the feature feels solid enough for an early beta. Notably, mirroring doesn’t work while your iPhone is in use, which makes sense.

A Mac desktop with a gray background showing a Drafts window on the left side and an iPhone mirrored via the new iPhone Mirroring app in the second beta of macOS Sequoia.

There are some limitations. I tried to record dictation into the Whisper Memos iPhone app through the Mac while mirroring, but it didn’t take. Nevertheless, keeping your iPhone on your Mac screen is excellent.

Private Cloud Compute

I watched the Apple WWDC 2024 keynote again, and one of the sections that went by pretty quickly was the reference to Private Cloud Compute, or PCC. For some of Apple’s AI initiative, they will need to send your data to the cloud. The explanation wasn’t clear about when that becomes necessary or what factors come into play; hopefully, Apple discloses more in the future. Regardless, Apple has built its own server farm using Apple silicon to do that processing. According to Craig Federighi, they will use the data, send back a response, and then cryptographically destroy the data after processing.

Theoretically, Apple will never be able to know what you did or asked for. This sounds like a tremendous amount of work, and I’m unaware of any other AI provider doing it. It’s also exactly the kind of thing I would like to see Apple do. The entire discussion of PCC was rather short in the keynote, but I expect Apple will disclose more as we get closer to seeing the Apple Intelligence betas.
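We don’t know PCC’s actual design, but the “cryptographically destroy” idea is easy to illustrate in miniature with CryptoKit: seal the request with a one-time key, process it, and let the key vanish. To be clear, this is a loose analogy of the concept, not Apple’s implementation:

```swift
import Foundation
import CryptoKit

// Loose analogy only, NOT Apple's PCC design: a request sealed with a
// one-time symmetric key becomes permanently unreadable once that key
// is discarded after processing.
let ephemeralKey = SymmetricKey(size: .bits256)
let request = Data("summarize my week of health data".utf8)

// "Client" seals the request for transit.
let sealed = try ChaChaPoly.seal(request, using: ephemeralKey)

// "Server" opens it, produces a response, and is done with the key.
let opened = try ChaChaPoly.open(sealed, using: ephemeralKey)
print("Processed \(opened.count) bytes")

// Once `ephemeralKey` goes out of scope and is destroyed, the sealed
// request can never be decrypted again.
```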