Apple Licensing Data for its AI Training

The New York Times reports that Apple is negotiating to license published materials for training its generative AI models. This shouldn’t be a surprise. A few years ago, when image processing was the big thing, everyone thought Apple would fall behind because it wasn’t collecting all our photos as training data. Then I saw Craig Federighi explain how Apple could get pictures of mountains elsewhere and didn’t need mine.

Machine learning is similar: it requires a data set to train on. Again, Apple is looking to buy data rather than setting its AI loose on the open Internet. I really wish I had a better idea of what Apple is planning to do with AI.

A Different Take on Apple and AI

William Gallagher is a pretty clever guy, and I enjoyed his take on Apple and AI over at AppleInsider. Based on Apple’s latest paper, the company seems (unsurprisingly) interested in finding ways to run Large Language Models (LLMs) on memory-constrained local devices. In other words, AI without the cloud. We saw this a few years ago with image processing. Apple wants to have the tools while preserving user privacy. Just from speaking to Labs members in privacy-conscious businesses, I expect this will be very popular if it works.

Sam Altman’s Return to OpenAI

It was quite the week over at the OpenAI Office. I’m sure someone will write a book about it at some point. From the outside, it looked like another example of the conflicting priorities that always result when a nonprofit owns a for-profit company. Regardless, those priorities got sorted out this week.

My only other comment on this is the irony that OpenAI is the company making the thing that many fear will replace their jobs. Yet, when push came to shove, OpenAI’s biggest concern was keeping their humans, not their robots.

Is AI Apple’s Siri Moonshot?

The Information has an article by Wayne Ma reporting Apple is spending “millions of dollars a day” on Artificial Intelligence initiatives. The article is paywalled, but The Verge summarizes it nicely.

Apple has multiple teams working on different AI initiatives throughout the company, including Large Language Models (LLMs), image generation, and multi-modal AI, which can recognize and produce “images or video as well as text”.

The Information article reports Apple’s Ajax GPT has more than 200 billion parameters and is more capable than GPT-3.5.

I have a few points on this.

First, this should be no surprise.

I’m sure folks will start writing about how Apple is now desperately playing catch-up. However, I’ve seen no evidence that Apple got caught with its pants down on AI. They’ve been working on artificial intelligence for years; Apple’s head of AI, John Giannandrea, came over from Google back in 2018. You’d think people would know by now that just because Apple doesn’t talk about things doesn’t mean it isn’t working on them.

Second, this should dovetail into Siri and Apple Automation.

If I were driving at Apple, I’d make the Siri, Shortcuts, and AI teams all share the same workspace in Apple Park. Thus far, AI has been smoke and mirrors for most people. If Apple implements it in a way that directly impacts our lives, people will notice.

Shortcuts, with its Actions, gives Apple an easy way to pull this off. Example: You leave 20 minutes late for work. When you connect to CarPlay, Siri asks, “I see you are running late for work. Do you want me to text Tom?” That seems doable with AI and Shortcuts. The trick would be for it to self-generate. It shouldn’t require me to already have an “I’m running late” shortcut; it should make one dynamically as needed. As reported by 9to5Mac, Apple wants to incorporate language models to generate automated tasks.
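The trigger logic for that running-late example is simple enough to sketch. This is a toy Python illustration of the idea, not how Shortcuts or Siri actually work; the function name, threshold, and contact are all hypothetical, standing in for whatever a generated automation would use.

```python
from datetime import datetime, timedelta

def late_prompt(usual_departure: datetime, now: datetime,
                contact: str = "Tom", threshold_minutes: int = 15):
    """Return a Siri-style suggestion if the user is leaving notably late,
    or None if they are still within the threshold."""
    delay = now - usual_departure
    if delay < timedelta(minutes=threshold_minutes):
        return None
    minutes = int(delay.total_seconds() // 60)
    return (f"I see you are running {minutes} minutes late for work. "
            f"Do you want me to text {contact}?")

# Usual departure is 8:00; today it's 8:20 and we're still home.
usual = datetime(2023, 12, 18, 8, 0)
now = datetime(2023, 12, 18, 8, 20)
print(late_prompt(usual, now))
```

The interesting part isn’t the math, of course; it’s having a language model decide that this check is worth running and then compose the message, all without a pre-built shortcut.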

Similarly, this technology could result in a massive improvement to Siri if done right. Back in reality, however, Siri still fumbles simple requests routinely. There hasn’t been the kind of improvement that users (myself included) want. Could it be that all this behind-the-scenes AI research is Apple’s ultimate answer on improving Siri? I sure hope so.

My Transcription Workflow for the Obsidian Field Guide (MacSparky Labs)

In this video I demonstrate how I used two AI tools, MacWhisper and ChatGPT, to generate transcripts and SubRip text (SRT) files for the Obsidian Field Guide videos.…
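The SRT side of that workflow is less magic than it sounds, because the SubRip format itself is just numbered blocks of timestamps and text. Here is a minimal sketch of turning transcription segments into an SRT file, assuming Whisper-style `(start, end, text)` tuples; the segment layout and names are illustrative, not MacWhisper’s actual export format.

```python
def srt_timestamp(seconds: float) -> str:
    """Format seconds as the HH:MM:SS,mmm stamp SRT requires."""
    ms = round(seconds * 1000)
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def to_srt(segments) -> str:
    """Turn (start, end, text) segments into an SRT document."""
    blocks = []
    for i, (start, end, text) in enumerate(segments, start=1):
        blocks.append(f"{i}\n{srt_timestamp(start)} --> {srt_timestamp(end)}\n{text}\n")
    return "\n".join(blocks)

segments = [(0.0, 2.5, "Welcome to the Obsidian Field Guide."),
            (2.5, 5.0, "Let's start with the basics.")]
print(to_srt(segments))
```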

This is a post for MacSparky Labs Level 3 (Early Access) and Level 2 (Backstage) Members only.

Specific vs. General Artificial Intelligence

The most recent episode of the Ezra Klein podcast includes an interview with Demis Hassabis, the head of Google DeepMind, whose AlphaFold project used artificial intelligence to predict protein structures, work essential to addressing numerous genetic diseases, drug development, and vaccines.

Before the AlphaFold project, human scientists, after decades of work, had solved the structures of around 150,000 proteins. Once AlphaFold got rolling, it predicted roughly 200 million protein structures, nearly every protein known to science, in about a year.

I enjoyed the interview because it focused on artificial intelligence that solves specific problems (like protein folds) instead of one all-knowing AI that can do anything. At some point in the future, a more general AI will be useful, but for now, these smaller, specific AI projects seem like the best path. They can help us solve complex problems while staying constrained to just those problems, while we humans figure out the big-picture implications of artificial intelligence.

Is ChatGPT Really Artificial Intelligence?

Lately, I’ve been experimenting with some of these Large Language Model (LLM) artificial intelligence services, particularly ChatGPT. Several readers have taken issue with my categorization of ChatGPT as “artificial intelligence”. The reason, they argue, is that ChatGPT really is not an artificial intelligence system. It is a linguistic model that looks at a massive amount of data and smashes words together without any understanding of what they actually mean. Technologically, it has more in common with the grammar checker in Microsoft Word than HAL from 2001: A Space Odyssey.

You can ask ChatGPT for the difference between apples and bananas, and it will give you a credible response, but under the covers, it has no idea what an apple or a banana actually is.
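To see what “smashing words together” means in its crudest form, here is a toy next-word predictor built on nothing but bigram counts. Real LLMs use vastly more sophisticated models, so this is a deliberately simplified sketch, but it illustrates the readers’ point: the program picks plausible words purely from statistics, with zero idea what an apple or a banana is.

```python
from collections import defaultdict

def train_bigrams(corpus: str):
    """Count which word follows which in the training text."""
    counts = defaultdict(lambda: defaultdict(int))
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        counts[current][nxt] += 1
    return counts

def predict_next(counts, word: str) -> str:
    """Pick the most frequent follower -- pure statistics, no understanding."""
    followers = counts.get(word)
    if not followers:
        return ""
    return max(followers, key=followers.get)

corpus = "an apple is a fruit and a banana is a fruit"
model = train_bigrams(corpus)
print(predict_next(model, "is"))  # "a" -- the most common follower of "is"
```

The output looks vaguely language-like for the same reason ChatGPT’s output looks fluent: the model has seen what tends to come next. It just has no concept of fruit, medicine, or anything else.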

One reader wrote in to explain that her mother’s medical professional actually had the nerve to ask ChatGPT about medical dosages. ChatGPT’s understanding of what medicine does is about the same as its understanding of what a banana is: zilch.

While some may argue that ChatGPT is a form of artificial intelligence, I find the argument that it is not more compelling. Moreover, calling it artificial intelligence gives us barely evolved monkeys the impression that it actually is some sort of artificial intelligence that understands and can recommend medical dosages. That is bad.

So going forward, I will be referring to things like ChatGPT as LLMs, not artificial intelligence. I’d encourage you to do the same.

(I want to give particular thanks to reader Lisa, who first made the case to me on this point.)