
Nearly 15 years in, Apple’s voice assistant is about to learn something its rivals figured out years ago.
Apple is testing a Siri upgrade that would let the assistant handle multiple requests in a single query. According to a Bloomberg report by Mark Gurman published March 31, the feature is being developed for iOS 27, iPadOS 27, and macOS 27, all expected to ship later this year.
The change sounds minor. It is not.
Right now, if you ask Siri to check the weather and set a reminder in the same sentence, it fumbles. You have to break everything into individual requests, re-activate the assistant between each one, and hope it understands what you meant. The new feature would let you say something like “check the weather, create a calendar appointment, and send a message” and have Siri parse and execute all three without any hand-holding.
Siri can currently answer follow-up questions without being re-triggered, but those still need to come as separate, sequential requests. That distinction matters more in practice than it sounds, especially when you are mid-task.
This Is Not a Minor Bug Fix
There is no gentle way to put this: what Apple is testing right now is something Amazon’s Alexa has been doing since 2019. ChatGPT, Gemini, and Perplexity handle compound queries as a baseline expectation. The fact that Siri is only now getting there, and is still months away from a public release, says something real about how far behind Apple fell during the first wave of the AI boom.
Apple’s initial Apple Intelligence rollout in 2024 landed with little excitement. Some features were delayed, others arrived half-finished, and the revamped Siri that was promised at WWDC 2024 never fully materialized. A shareholder lawsuit followed, citing misleading claims about the company’s AI readiness.
The multi-command feature is part of a broader Siri overhaul that Apple plans to preview at WWDC 2026, which opens June 8.
What Is Actually Changing
Beyond multi-command support, the new Siri is being rebuilt on a different foundation. Apple has reportedly brought in Google’s Gemini technology as the core model powering the revamped assistant. That is a significant shift for a company that has historically kept its AI stack in-house.
The rebuilt Siri would also finally deliver features Apple promised two years ago: personal context awareness, the ability to read and react to what is on screen, and the ability to take actions across hundreds of apps. Siri would also gain access to web information, potentially through a feature Apple is internally calling World Knowledge Answers, giving it the ability to summarize current information rather than routing users to a browser.
There is also a reported plan to open Siri beyond ChatGPT, which is currently the only third-party AI it routes to. Under iOS 27, users may be able to direct queries to Gemini, Claude, or other AI assistants installed through the App Store, with Siri acting more like a traffic controller than a standalone brain.
Apple is also testing new interface designs, including placing Siri in the Dynamic Island at the top of the screen and expanding it into a translucent panel once results are ready.
A September Target, With Caveats
Apple is aiming to have the new Siri ready by September, but Gurman’s report notes that some features may carry a “Preview” label at launch, the same approach Apple used to signal that early Apple Intelligence features were still in progress. Translation: what ships in the fall may not be the complete picture.
If that timeline holds, it will have been over two years between Apple's initial Siri AI announcement and a fully working version reaching users. That is a long time in an industry moving as quickly as this one.
Whether Apple closes the gap at WWDC or manages another year of promising more than it ships is the real question. The multi-command feature is a step in the right direction. Whether it is part of a coherent product or another preview placeholder is something we will know in June.