The most recent episode of the Ezra Klein podcast includes an interview with Demis Hassabis, the head of Google DeepMind, whose AlphaFold project used artificial intelligence to predict the shapes of proteins, work essential to addressing numerous genetic diseases, drug development, and vaccines.
Before the AlphaFold project, human scientists, after decades of work, had solved the structures of around 150,000 proteins. Once AlphaFold got rolling, it predicted the shapes of 200 million proteins, nearly every known protein, in about a year.
I enjoyed the interview because it focused on using artificial intelligence to solve specific problems (like protein folding) instead of building one all-knowing AI that can do anything. At some point in the future, a more general AI will be useful, but for now, these smaller, problem-specific AI projects seem like the best path. They can help us solve complex problems while staying constrained to just those problems, giving us humans time to figure out the big-picture implications of artificial intelligence.
You may have seen the news that the actors’ union is now on strike. This affects some of our friends and many more of my daughter’s friends since she is currently at UCLA in the Theater and Film School…
Lately, I’ve been experimenting with some of these Large Language Model (LLM) artificial intelligence services, particularly ChatGPT. Several readers have taken issue with my categorization of ChatGPT as “artificial intelligence”. The reason, they argue, is that ChatGPT is not really an artificial intelligence system. It is a linguistic model that looks at a massive amount of data and smashes words together without any understanding of what they actually mean. Technologically, it has more in common with the grammar checker in Microsoft Word than with HAL from 2001: A Space Odyssey.
You can ask ChatGPT for the difference between apples and bananas, and it will give you a credible response, but under the covers, it has no idea what an apple or a banana actually is.
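If you want to see what “smashing words together” looks like in miniature, here is a toy sketch: it only counts which words tend to follow which in a tiny corpus and then chains them together, with no understanding anywhere in the process. This is a deliberately crude illustration, not how ChatGPT is actually built, but the underlying idea of statistical prediction rather than comprehension is the point.

```python
# Toy "language model": predict the next word purely from word-pair counts.
import random
from collections import defaultdict

corpus = (
    "an apple is a crunchy fruit . a banana is a soft fruit . "
    "an apple is red or green . a banana is yellow ."
).split()

# Record which word follows which in the training text.
followers = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current].append(nxt)

def babble(word, length=8):
    """Chain together statistically plausible words; no meaning required."""
    out = [word]
    for _ in range(length):
        word = random.choice(followers.get(word, ["."]))
        out.append(word)
    return " ".join(out)

print(babble("a"))  # might print: a banana is red or green . a soft
```

The output can read like a sentence about fruit while being flatly wrong, because the script has no idea what a banana is. Scale that idea up enormously and you have the gist of the readers’ criticism.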
One reader wrote in to explain that her mother’s medical professional actually had the nerve to ask ChatGPT about medical dosages. ChatGPT’s understanding of what medicine does is about the same as its understanding of what a banana is: zilch.
While some may argue that ChatGPT is a form of artificial intelligence, I have to agree that there is a more compelling argument that it is not. Moreover, calling it artificial intelligence gives us barely evolved monkeys the impression that it actually is some sort of artificial intelligence that understands and can recommend medical dosages. That is bad.
So going forward, I will be referring to things like ChatGPT as LLMs, not artificial intelligence. I’d encourage you to do the same.
(I want to give particular thanks to reader Lisa, who first made the case to me on this point.)
A recent update to Grammarly adds even more artificial intelligence: it’s called GrammarlyGO. I say “more AI” because Grammarly has always been an AI-based grammar-checking service. (There have never been humans there proofreading your work.)
GrammarlyGO takes it up a notch with the ability to adjust your voice and bring in other AI-based suggestions. This strikes me as a valuable form of AI: not writing for me, but making my own words better. Grammarly incorporating AI like this makes total sense.
I’ve been spending some time experimenting with Google Bard, Google’s competitor to ChatGPT. It got a few things right and a few things wrong, but this is an easy platform if you want to experiment with large language model artificial intelligence…
I read this post by John Gruber, and I couldn’t agree more about the shenanigans that will come from AI-generated deepfakes. The computers are so good at duplicating your voice at this point that a determined jackass could “produce” a tape of you saying anything. Conversely, an insolent jackass will deny an actual recording of him and claim it is a deepfake. Down is up. Up is down.
I don’t know that we’ll ever have “smoking gun” audio again. It’s just a question of time before that is true for video, too. The bad guys are certainly going to use this to further polarize us. Be warned.
Last week Microsoft gave an impressive presentation demonstrating the incorporation of artificial intelligence into their productivity apps. You can have it summarize and analyze data in Excel, write better documents in Word, and even summarize email in Outlook.
Moreover, it had less of that wild west feel we are seeing in most of the artificial intelligence features added to existing apps. This was clearly thought out. It’s worth watching the presentation even if you don’t use Microsoft software.
I really think this is a step in the right direction. What I would ultimately like from artificial intelligence is for it to help me get my work done better and faster. So much of modern technology seems to get in the way of serious work, rather than assist it. If you’ve ever watched any of the Iron Man movies, Tony Stark always had Jarvis working in the background for him, handling little things so Tony could work on the big things. I want Jarvis.
Just think how much easier your life could be if you had a digital assistant that could do things for you like the following (a small scripted sketch follows the list):
Manage calendars and schedule appointments
Send and respond to emails
Set reminders and alarms
Make reservations and appointments
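We are nowhere near a real Jarvis yet, but you can already script small pieces of that list on a Mac today. Here is a minimal sketch, assuming macOS with the stock Reminders app (the little helper function is my own invention), that hands a task off to Reminders from a script:

```python
# Create a reminder in the macOS Reminders app by passing an AppleScript
# one-liner to the standard osascript command-line tool.
import subprocess

def add_reminder(title: str) -> None:
    """Add a reminder with the given title to the default Reminders list."""
    script = f'tell application "Reminders" to make new reminder with properties {{name:"{title}"}}'
    subprocess.run(["osascript", "-e", script], check=True)

add_reminder("Send the newsletter draft to the editor")
```

The difference with a true Jarvis, of course, is that I wouldn’t have to write the script at all; I would just ask.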
Seeing these initial steps from Microsoft gives me hope that Jarvis may show up sooner than I thought.
I enjoyed this article from Dr. Drang about the robot-created AppleScript. I think AppleScript will be one of the most difficult languages for AI models to write because it was designed to read like human language, and that makes it quirky.
The other thing about AppleScript that will likely trip up the AI models (it certainly trips me up) is the modular nature of the language. Every app that implements AppleScript supplies its own scripting dictionary. These dictionaries vary greatly from one app to another, and every script involving a new app requires a bit of spelunking.
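Here is a quick illustration of what I mean. The Python wrapper is mine, but the two one-liners come from Finder’s and Music’s scripting dictionaries; both “get the name of something,” yet the vocabulary is completely different from one app to the other (and the Music line errors if nothing is playing):

```python
# Two AppleScript one-liners run via the standard osascript tool. Each app
# ships its own scripting dictionary, so the vocabulary changes per app.
import subprocess

def run_applescript(script: str) -> str:
    result = subprocess.run(["osascript", "-e", script],
                            capture_output=True, text=True, check=True)
    return result.stdout.strip()

# Finder talks about items, folders, and windows...
print(run_applescript('tell application "Finder" to get name of every item of desktop'))

# ...while Music talks about tracks and playlists.
print(run_applescript('tell application "Music" to get name of current track'))
```

Learn Finder’s vocabulary and you have learned almost nothing about Music’s, which is exactly the spelunking problem the robots will face.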
Good luck with AppleScript, Robots, you’ll need it.
It seems clear that in tech circles, 2023 will become the year of AI. Artificial intelligence for creating images and writing text has been around for several years, but this is the year it has entered the mainstream.
One of the ways you see this is the increasing inclusion of artificial intelligence as an app feature. The first time I saw this was in Craft. I use Craft for managing my team, and the developers quickly added artificial intelligence as a feature in the application. Anywhere in a Craft document, I can hit Command-Return, type into a prompt, and have the robot generate some text. I don’t find it particularly useful (yet). Still, it’s clear that in the future, as artificial intelligence gets better, this will be something we expect anywhere we see a cursor.
But it goes beyond writing applications. We are already seeing it in applications that you would not naturally think of as a destination for an artificial intelligence engine. A few weeks ago, Raycast announced they are adding artificial intelligence to their keyboard launcher. It’s a good idea. It allows you to generate AI text anywhere and then paste it somewhere else on your Mac.
As to images, Pixelmator Pro has been taking advantage of artificial intelligence for years. It can do all sorts of interesting tricks with your images, using AI to give users power features without requiring power knowledge. For me, this is one of the best implementations of AI because I am not an expert with image manipulation applications. The application helps me bridge the gap.
One of my favorite implementations of it has been at SweetProcess.com. This is a web-based service that lets you document processes for your team. They have implemented AI into their engine so you can generate a new employee email or create a list of employee processes using artificial intelligence. Seeing this in action reminded me that artificial intelligence will be everywhere in the not-too-distant future.
My point is that AI is showing up in apps and services everywhere.
At this point, I find artificial intelligence more helpful for generating ideas than actual text. As an experiment, I was working on a newsletter for the MacSparky Labs this morning and asked it to generate text about the rumored new 15-inch MacBook Air. None of the generated text was usable. It read like a book summary by a clever person who’d only read the dust jacket. But when I asked artificial intelligence to come up with some names for this article, it did a pretty good job (although I didn’t pick any).
Regardless, you should expect more of your favorite apps to adopt some form of artificial intelligence. And when they do, have an open mind about it, and figure out where it can help you and where it falls short. Now that the snowball has started rolling, I’m eager to see how big it gets.
There’s a Mac and mobile application, Poe, that gives you an easy way to kick the tires on artificial intelligence. I’ve been playing with it for the last day, and I recommend it, particularly if you’ve never tried this kind of thing before.
The application is as simple as download, open, and start talking to it. It’ll give you a good idea about the state of conversational artificial intelligence. It is easy to make fun of this stuff; AI gets a lot wrong. But it’s constantly learning. Like it or not, this stuff is coming, and it’s time to wrap our heads around that.
While we are not at the point of robot overlords just yet, we are getting to the point of helpful robots.