Apple offers fresh developer betas for the second round of Apple Intelligence features

Apple is already working on the next set of features for its much-anticipated foray into the artificial intelligence boom, even as the first batch becomes available to the public early next week.

Apple released developer betas of iOS 18.2, iPadOS 18.2, and macOS 15.2 on Wednesday. These versions include Apple Intelligence features previously seen only in the company’s own advertising and product announcements: custom prompts for Writing Tools, Visual Intelligence, expanded English-language support, ChatGPT integration, and three kinds of image generation.

Three kinds of image generation

For the first time, the public will have access to Apple’s suite of image-based generative AI tools: Image Playground, Genmoji, and Image Wand. Apple has eschewed photorealistic image generation in favor of a few distinct styles it refers to as “animation” and “illustration.” When it first unveiled these features at WWDC in June, the company said they were meant to facilitate the creation of lighthearted, playful images to share among family and friends.

Genmoji generates custom emoji based on a user’s prompt, offering multiple possibilities to choose from. The resulting graphics can be sent as a sticker, inline in a message, or even as a tapback. (As an example, one might request an emoji of a “rainbow-colored apple.”) Genmoji can also generate emoji based on the faces in the People section of your Photos library. Genmoji creation is not currently supported on the Mac.

Image Playground is a simple image generator with some intriguing limitations. To begin, the feature presents you with a selection of suggested concepts, or you can simply type a description of the kind of image you’re looking for. Like Genmoji, Image Playground can create images based on people in your photo library, and it can also generate related images from individual photos. The resulting images adhere to specific, non-photographic styles, such as hand-drawn illustration or Pixar-style animation.

Users can turn a crude drawing into a more detailed image with Image Wand. It works by circling a sketch that needs an AI upgrade after choosing the new Image Wand tool from the Apple Pencil tools palette. Image Wand can also create images out of whole cloth, based on the surrounding text.

Image generation tools, of course, open the door to producing potentially inappropriate content. Apple is working to mitigate this risk in a number of ways, such as restricting the kinds of content the models are trained on and imposing guidelines on the prompts that are allowed. For instance, it will specifically weed out attempts to produce images containing violence, nudity, or copyrighted content. And because an unexpected or concerning result is a risk with any model of this kind, Apple is giving users the option to report images directly within the tool itself.

Additionally, third-party developers will get access to the Image Playground and Genmoji APIs, enabling them to build support for those features into their own applications. This is especially crucial for Genmoji, since users’ personalized emoji won’t otherwise be supported by third-party messaging apps.
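As a rough sketch of what adopting the Image Playground API might look like, here is a minimal SwiftUI view based on the `imagePlaygroundSheet` modifier from the iOS 18.2 beta SDK’s ImagePlayground framework. The view name and prompt string are illustrative, and exact signatures may shift during the beta:

```swift
import SwiftUI
import ImagePlayground

// Sketch: presenting the system Image Playground sheet from a third-party app.
// Assumes the beta ImagePlayground framework; APIs may change before release.
struct StickerMakerView: View {
    @State private var showPlayground = false
    @State private var generatedImageURL: URL?

    var body: some View {
        Button("Create Image") { showPlayground = true }
            .imagePlaygroundSheet(
                isPresented: $showPlayground,
                concept: "a rainbow-colored apple"  // seed concept for generation
            ) { url in
                // The system hands back a file URL for the generated image.
                generatedImageURL = url
            }
    }
}
```

The key design point is that apps don’t call the model directly: they present Apple’s system sheet, and the user drives generation inside it, which keeps Apple’s content guidelines enforced uniformly across apps.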

Give commands to Writing Tools

The update also brings more of the free-form text input commonly associated with large language models. Writing Tools now offers a custom text input box, whereas the first wave of features mostly let you tap various buttons to alter your content. When you select some text and bring up Writing Tools, you can tap to enter text specifying how you’d like Apple Intelligence to change it. For instance, I could have selected this paragraph and written “make this funnier.”

In addition to the developer beta, Apple is releasing an API for Writing Tools. This matters because while Writing Tools are available in all apps that use Apple’s standard text controls, some programs, including some of the ones I use frequently, rely on their own custom text-editing controls. Once those apps implement the Writing Tools API, all of the Writing Tools functionality will be available to them.
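For apps built on the standard controls, adoption looks less like a rewrite and more like opting in. A minimal sketch, assuming the iOS 18 `writingToolsBehavior` property and Writing Tools delegate callbacks on `UITextView` (apps with fully custom text engines need deeper integration than this):

```swift
import UIKit

// Sketch: opting a UIKit text view into the full Writing Tools experience.
// Assumes the iOS 18 SDK's writingToolsBehavior API; details may change in beta.
final class NotesViewController: UIViewController, UITextViewDelegate {
    let textView = UITextView()

    override func viewDidLoad() {
        super.viewDidLoad()
        textView.delegate = self
        // Request the complete experience (in-place rewrites) rather than
        // the limited, overlay-only behavior.
        textView.writingToolsBehavior = .complete
        view.addSubview(textView)
    }

    // The system signals when it starts and stops rewriting, so the app
    // can pause its own work (autosave, sync) on the text in between.
    func textViewWritingToolsWillBegin(_ textView: UITextView) {
        // pause autosave while Apple Intelligence edits the text
    }

    func textViewWritingToolsDidEnd(_ textView: UITextView) {
        // resume autosave with the rewritten text
    }
}
```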

ChatGPT, if you want it

ChatGPT integration also arrives for the first time in this new wave of capabilities. This includes the ability to pass Siri queries to ChatGPT, which happens dynamically depending on the nature of the query. For instance, you might ask Siri to plan a day’s activities in a different city. Users are asked to enable the ChatGPT integration when they first install the beta, and they will be asked again each time they submit a query. You can also disable the per-query confirmation, or turn the integration off entirely, in Settings. You may occasionally receive extra requests to share particular types of personal information with ChatGPT, such as when your query also includes an uploaded file.

According to Apple, your IP address is masked so that separate queries can’t be linked, and by default, requests sent to ChatGPT are not stored by the service or used for model training. You can choose to log into a ChatGPT account, which offers more reliable access to particular models and features, though an account isn’t necessary to use the feature. Otherwise, ChatGPT will choose on its own whichever model best answers the question.

If you have ever used ChatGPT for free, you know there are restrictions on which models you can use and how many queries you can make in a single session. Interestingly, ChatGPT usage is also limited for Apple Intelligence users; if you use it excessively, you’ll likely run into usage restrictions. It’s unclear, however, whether Apple’s agreement with OpenAI means iOS users get more generous limits than randos on the ChatGPT website. (If you log in with a paid ChatGPT account, you’ll be subject to that account’s limits.)

Visual Intelligence for iPhone 16 models

The Visual Intelligence feature, initially demonstrated at the launch of the iPhone 16 and iPhone 16 Pro last month, will also be available to owners of those models in this beta. (To enable Visual Intelligence, press and hold the Camera Control button, then aim the camera and press the button once more.) Visual Intelligence searches for information about whatever the camera is currently viewing, such as the hours of a restaurant you’re standing in front of or event details from a poster, in addition to translating text, scanning QR codes, reading text aloud, and more. It also offers the option to use Google search or ChatGPT to learn more about the object it’s examining.

Support for more English dialects

Apple Intelligence supported only U.S. English when it first launched, but the most recent developer betas expand that support. The features remain English-only, but Apple Intelligence is now available to English speakers in Canada, the UK, Australia, New Zealand, and South Africa. (Apple also says that English localizations for Singapore and India, along with support for a number of other languages, including Chinese, French, German, Italian, Japanese, Korean, Portuguese, Spanish, and Vietnamese, will arrive in 2025.)

What comes next?

Apple is gathering feedback on how well its Apple Intelligence features work as part of these developer betas. The company intends to use that input both to improve its tools and to determine when they might be ready for a wider audience. We certainly get the impression that Apple is moving as cautiously as possible here, even as it launches into its artificial-intelligence future. It knows AI-based tools will have quirks, which makes these beta cycles all the more crucial for shaping the final product.

These two releases will presumably be made available to the general public later this year; before then, there will undoubtedly be many more developer betas, followed by public betas. Additionally, a number of announced Apple Intelligence features are still to come, most notably several crucial new Siri capabilities, including support for Personal Context and App Intents for in-app actions. Apple Intelligence takes a new stride today, but there is still a long way to go.

Komal Patil