KAMPALA, UGANDA – The way we interact with our devices is undergoing a quiet—but radical—transformation.
In his latest newsletter, AI and marketing consultant Andrew Bolis highlights a wave of updates from Microsoft, Google, and Apple that reveal just how fast artificial intelligence is being embedded into the daily tools millions rely on. From the desktop operating systems that shape our workdays to the photo apps that hold our memories, big tech’s AI push is no longer about future potential. It’s here, live, and already changing how people use their phones, laptops, and cameras, often without users even realizing it.
At the heart of this shift is Microsoft’s Copilot Vision, a feature now rolling out on Windows 11 that lets the AI assistant see what’s on your screen. More than just a chatbot, Copilot now functions as a kind of second brain for your computer, watching, interpreting, and offering help in real time. It can analyze what you’re doing across multiple apps, suggest actions, walk you through complex settings, and even adjust your system based on natural language prompts.
Microsoft describes it as the most advanced integration of AI into Windows to date. Users can share two apps simultaneously with Copilot, allowing for contextual assistance across tasks—no more toggling between tabs and tutorials. There’s also a new “Show Me How” feature for step-by-step guidance, and early testers can now try full desktop sharing, essentially inviting the assistant to look over their shoulder.
Bolis notes this is part of Microsoft’s larger strategy to turn Windows into the first AI-native desktop environment, setting the stage to compete with Google’s Gemini Live and Apple’s own Apple Intelligence. “Copilot Vision isn’t just smart—it’s situationally aware,” Bolis writes. “That makes it a game-changer.”
But while Microsoft is doubling down on productivity and guidance, Google is moving fast to reimagine creativity.
The latest update to Google Photos lets users animate still pictures into mini videos and transform them into stylized versions that look like anime frames, comic panels, or 3D illustrations. The tech behind it—Google’s own Veo 2 and Imagen AI models—means that with a single tap, users can turn a selfie into a sketch or a sunset into a six-second animated clip.
The new tools are simple, even playful. Users can select animation styles like “Subtle Movements” or take a chance with “I’m Feeling Lucky.” The updated app also includes a new “Create” tab to centralize editing tools and streamline the experience. All AI-generated content is marked with SynthID watermarks, Google’s way of invisibly flagging content created or altered by its AI models.
What makes this rollout notable isn’t just the tech—it’s the reach. With over 1.5 billion users, Google is turning one of the world’s most popular photo apps into a powerful creative studio, one that doesn’t require any editing knowledge. For people who might never open Photoshop or pay for animation software, this brings those capabilities into their pocket.
“It’s a clever move,” Bolis notes. “People want to be creators, not just consumers. This makes that easier than ever—and keeps Google at the center of their digital lives.”
Meanwhile, Apple is taking a more measured, design-focused approach with the iOS 26 beta 4 release.
While it includes a series of visual refinements—like enhanced Liquid Glass effects and dynamic wallpapers that subtly shift throughout the day—it also brings back one of its most controversial features: AI-generated news summaries. The summaries return with new warnings that the condensed content may “change the meaning” of the original article, a nod to previous concerns about AI distortions.
Apple also added subtle UI enhancements, from deeper transparency effects in navigation bars to a new “Reduce Loud Sounds” option replacing “Late Night Mode.” Updates to CarPlay, camera icons, and call screening settings point to a broader effort: fine-tuning AI and design without overwhelming users or compromising trust.
“Apple’s changes feel less dramatic but more deliberate,” Bolis explains. “They’re responding to criticism while quietly making iOS more dynamic and more aware—just not in a way that feels invasive.”
What connects all three companies is a shared vision: AI shouldn’t be something users seek out—it should be something that seamlessly appears when they need it most. Whether it’s a digital assistant helping you configure Bluetooth settings, a photo app remixing vacation pictures into moving art, or a phone summarizing breaking news with a note of caution, this latest wave of updates suggests that AI’s new frontier isn’t found in research labs. It’s unfolding on your home screen.
These aren’t just software updates—they’re signposts. Signs that the age of passive apps is ending. And in its place, we’re entering a world where your devices won’t just respond to you—they’ll anticipate you.
Whether that’s empowering or unnerving depends on who’s watching—and what they choose to do with what they see.