Don’t Underestimate Apple’s Shot at On-Device Medical AI

There’s a rumor that Apple is working on an on-device medical AI. The idea is that your iPhone or Apple Watch could use its onboard silicon to privately analyze your health data and offer recommendations, without sending that sensitive information to the cloud.

The general vibe I’m seeing in response to this rumor is skepticism. Plenty of folks out there think there’s no way Apple can pull this off. I disagree: this is exactly the kind of thing Apple should be doing, and it presents a real opportunity.

Apple has been steadily building up its health tech for years. With features like Atrial fibrillation (AFib) detection, ECG, and Fall Detection, they’ve proven they can deliver meaningful health tools. And they’ve done it with an eye toward user privacy and accessible design.

Now, imagine layering a personalized AI model on top of that foundation — something smart enough to notice patterns in your vitals, flag potential concerns, or even offer preventative guidance. And because Apple controls the hardware, they could run that AI model entirely on-device. That means your health data stays private, living only on your phone or watch, not bouncing around in the cloud.

Apple’s unique position here — owning both the hardware and the operating system — gives them access to a depth of personal health data that no off-the-shelf Large Language Model could ever touch. Combine that with their Neural Engine and you have a real opportunity to do something both powerful and private.
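To make the idea concrete, here’s a minimal sketch of what the plumbing could look like, assuming a hypothetical on-device model I’m calling VitalsClassifier. The HealthKit calls are real; the model and the overall flow are my invention, not anything Apple has announced.

```swift
import HealthKit

// Sketch only: read the last 24 hours of heart-rate samples from
// HealthKit, entirely on-device. Assumes the app has already called
// requestAuthorization(toShare:read:) for heart-rate access.
let healthStore = HKHealthStore()
let heartRateType = HKQuantityType.quantityType(forIdentifier: .heartRate)!

func fetchRecentHeartRates(completion: @escaping ([Double]) -> Void) {
    let lastDay = HKQuery.predicateForSamples(
        withStart: Date().addingTimeInterval(-86_400),
        end: Date(),
        options: []
    )
    let query = HKSampleQuery(
        sampleType: heartRateType,
        predicate: lastDay,
        limit: HKObjectQueryNoLimit,
        sortDescriptors: nil
    ) { _, samples, _ in
        let bpm = HKUnit.count().unitDivided(by: .minute())
        let rates = (samples as? [HKQuantitySample])?
            .map { $0.quantity.doubleValue(for: bpm) } ?? []
        // These readings never leave the device. From here they would
        // feed the hypothetical local Core ML model (VitalsClassifier),
        // which Core ML could schedule on the Neural Engine.
        completion(rates)
    }
    healthStore.execute(query)
}
```

Core ML already routes models to the Neural Engine when it can, so the plumbing is the easy part. The hard part is building a model worth trusting with this data.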

This also feels like a moment for Apple to make a statement with “Apple Intelligence.” So far, Apple’s AI initiative has been underwhelming. This could be a way for them to reset expectations with something carefully designed, respectful of privacy, and genuinely useful.

Of course, this only works if they get it right. Rushing something half-baked out the door won’t cut it, especially when people’s health (and Apple’s AI reputation) is at stake. But if they take their time and nail the execution, this could be a defining moment for Apple’s AI efforts and one more key feature that saves lives.

I hope the rumor’s true and that Apple gives this the time and resources it deserves. It could be something special.

AI Job Displacement Is Already Here

Tobi Lütke, CEO of Shopify, recently posted a letter to employees announcing a reduction in internal meetings and an AI restructuring initiative. Buried in that announcement was a sentence that hit like a cold splash of water:

“Before asking for more Headcount and resources, teams must demonstrate why they cannot get what they want done using AI.”

There it is: the quiet part, said out loud. AI job displacement isn’t some future scenario. It’s already happening at one of the most tech-forward companies in the world.

The disruption is upon us. AI will bring benefits, efficiencies, and entirely new opportunities. But it will also come with real costs. One of those is job loss, and as Shopify’s CEO makes clear, that phase has already begun.

ChatGPT Studio Ghibli Art

If you’ve been paying any attention to social media lately, you’ve probably noticed how much better multimodal AI art has gotten. The zeitgeist has definitely latched onto this, mainly to generate Studio Ghibli-style art of themselves.

It’s impressive and once again raises the big questions about AI and art. Artists spend years honing their craft. Now, with these new tools, anyone is just a prompt or two away from generating convincing images of themselves and their friends. How are we supposed to feel about that?

I’m still working through my own thoughts on AI and intellectual property, but one thing’s for sure: this toothpaste isn’t going back in the tube.

As an example, I had the new ChatGPT engine generate some drawings for a recent Productivity Field Guide webinar I did on habits as a tool for becoming your best self. I explained to ChatGPT that I wanted to illustrate how habits, once ingrained, become part of your identity. It made this image. Remarkable.

The Sparky Language Model (SLM)

I’ve been thinking a lot lately about artificial intelligence. There are a lot of good uses for it, but the one everyone talks about is writing. People are transfixed by its ability to write college-level essays. As AI technology becomes increasingly sophisticated, more and more people are turning to it for help with everything from drafting emails to generating content for the internet. This repels me. While the efficiency of these tools can’t be denied, I’ve decided to take a different path for anything published under my name, and I want to share why.

Recently, I attended a friend’s wedding. One of the attendees gave a moving speech, so good that I complimented the speaker afterward. They explained that ChatGPT wrote it; they had just read it aloud. This revelation left me profoundly unsettled. It got me thinking about the essence of personal expression and the irreplaceable value of the human touch in our communications.

Writing is more than just a means to convey information; it’s a way to connect on a deeply personal level. Whether celebrating a milestone with loved ones or sharing insights in this newsletter, these moments are opportunities to express our unique perspectives and emotions. When we delegate this task to an AI robot, no matter how sophisticated, we lose a piece of that human connection. It feels like a form of erasure — a diminution of our individuality and the personal stamp we leave on our work and the world.

For all MacSparky content, I’ve always aimed to keep things personal. The thoughts, tips, and stories I share are mine. They are crafted from my experiences, not generated from a dataset. While I use technology extensively to enhance productivity and creativity, the words under my name are always my own. I believe this authenticity is something you can’t replicate with algorithms.

Even if we get to a point where computers do an objectively better job of writing than I can (which may well happen, given enough time), I still have no interest. I am not interested in a better product that is not my product.

I understand the appeal of using AI to lighten our loads; technology is a powerful tool. But there are boundaries that we should navigate thoughtfully. For me, personal expression is one such boundary.

When it comes to writing words, I use an alternative to an LLM (Large Language Model). I call it the SLM: the Sparky Language Model.

Perplexity Pages

My experiments with Perplexity continue. This alternative search app takes a different approach to getting answers from the Internet. Rather than giving you a list of links to read, it reads the pages itself and tries to give you an answer, with footnotes pointing back to its sources. I think it’s a good idea, and Perplexity was early to this game. Google is now following suit with less success, but I’m sure they’ll continue to work on it.
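To illustrate the pattern (and only the pattern), here’s a minimal sketch of the cite-as-you-answer idea. The types are mine, not Perplexity’s, and the hard parts (search, ranking, summarization) are omitted.

```swift
// Sketch of the cite-as-you-answer pattern: each sentence in the
// synthesized answer points back at the source it came from, and the
// sources render as numbered footnotes. All types here are invented
// for illustration.
struct Source {
    let title: String
    let url: String
}

struct CitedAnswer {
    let sentences: [(text: String, sourceIndex: Int)]
    let sources: [Source]

    func rendered() -> String {
        let body = sentences
            .map { "\($0.text) [\($0.sourceIndex + 1)]" }
            .joined(separator: " ")
        let footnotes = sources.enumerated()
            .map { "[\($0.offset + 1)] \($0.element.title): \($0.element.url)" }
            .joined(separator: "\n")
        return body + "\n\n" + footnotes
    }
}
```

The rendering is trivial; the value is in keeping every claim traceable to a source.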

I recently got an email from Perplexity about a new feature called Perplexity Pages, where you give it a prompt and it builds a web page about a subject of interest to you. Just as an experiment, I had it create a page on woodworking hand planes. I fed it a few headings, and it generated this page, using the usual Perplexity approach of presenting information with footnotes to the websites it’s reading. I fed it a few additional topics, and it generated more content. Then I pressed “publish” with no further edits. The whole experiment took five minutes.

The speed at which these web pages can be created is both impressive and, in a way, unsettling. If we can generate web pages this quickly, it’s only a matter of time before we face significant challenges in distinguishing reliable information from the vast sea of content on the Internet. In any case, I invite you to explore my five-minute hand plane website.

On Apple Getting AI Help

Bloomberg reports that Apple is in talks with Google (and possibly OpenAI) about a deal to run iPhone AI features through these third-party providers. It’s all rumor at this point, but it doesn’t seem out of the realm of possibility.

Capacity is one of Apple’s big problems in terms of AI. There are so many iPhones out in the wild that any AI feature requiring cloud processing would demand massive server capacity to keep up. Apple has said there are more than a billion active iPhones; even a single cloud request per device per day works out to over 11,000 requests per second, around the clock.

That’s just one more reason why the on-device model makes sense. However, if there is anything Apple does want to run on a server, or make available for its users to run on a server, they’re going to need help. So the real question is whether Apple makes that deal or just does AI processing on-device.

Transcripts in Apple Podcasts

With the iOS 17.4 update, Apple’s Podcasts app can now create transcripts of podcasts. This is great news. For years, people have asked me to add transcripts to the Mac Power Users and my other shows, but it has always been cost-prohibitive. With the explosion of artificial intelligence over the last year or two, that is no longer the case. Better yet, the feature is built into the app, so we don’t even need to produce the transcripts ourselves.

[Image: iPad in landscape showing Apple’s Podcasts app playing the Mac Power Users episode “I Got to Be the Hero,” with the artwork and play controls on the left and the new live transcription feature on the right, some text highlighted at the top.]

A couple of nice touches: the transcript is searchable, and tapping anywhere in the transcript jumps the audio to that point.
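Under the hood, tap-to-jump presumably boils down to timestamped transcript segments driving a seek. Here’s a rough sketch of the general technique; the segment type is mine, and I have no idea how Apple actually implements it.

```swift
import AVFoundation

// Each transcript segment remembers where it starts in the audio.
// TranscriptSegment is a made-up type for illustration.
struct TranscriptSegment {
    let text: String
    let start: TimeInterval // seconds from the start of the episode
}

// Tapping a segment seeks the player to that timestamp.
func jump(to segment: TranscriptSegment, in player: AVPlayer) {
    let time = CMTime(seconds: segment.start, preferredTimescale: 600)
    player.seek(to: time)
}
```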

This is a really nice update to Podcasts. Is it going to be enough to pull me away from Overcast? Probably not. But I’m at least going to take a serious look.

Apple Licensing Data for Its AI Training

The New York Times reports Apple is in negotiations to license published materials for training their generative AI model. This shouldn’t be a surprise. A few years ago, when image processing was the big thing, everyone thought Apple would fall behind because they weren’t collecting all our images for training. Then I saw Craig Federighi explain that Apple could simply buy pictures of mountains; they didn’t need mine.

Training a generative model requires the same thing machine learning has always required: a data set. Again, Apple is looking to buy data rather than setting its AI loose on the open Internet. I really wish I had a better idea of what Apple is planning to do with AI.

A Different Take on Apple and AI

William Gallagher is a pretty clever guy, and I enjoyed his take on Apple and AI over at AppleInsider. Based on Apple’s latest paper, they seem (unsurprisingly) interested in finding ways to run Large Language Models (LLMs) on memory-constrained local devices. In other words, AI without the cloud. We saw the same play a few years ago with image processing: Apple wants the tools while preserving user privacy. Just from speaking to Labs members in privacy-conscious businesses, I expect this will be very popular if it works.