• The Action Button on the iPhone is one of those features that looks simple on the surface, but once you customize it, it can become a powerful accessibility tool. For blind users especially, having instant access to an AI assistant can make everyday tasks faster, safer, and less frustrating.

    Instead of digging through apps, you can set the Action Button to launch your preferred AI using live speech. That means quick help with reading documents, identifying surroundings, checking products while shopping, or getting answers on the go.

    Here’s how I set mine up.

    How to Customize the Action Button

    Open Settings.

    Swipe to Action Button and double tap.

    Swipe through the available options. You’ll see choices like Silent Mode, Focus, Camera, Flashlight, Voice Memos, Music Recognition, Translate, Magnifier, Controls, Shortcuts, and Accessibility.

    Double tap the option you want to assign. For AI access, we’re going to choose Shortcuts.

    Using the Action Button for AI Apps

    After selecting Shortcuts, swipe right until you find All Shortcuts and double tap.

    You’ll now see a list of available shortcuts. This is where things get interesting. Many AI apps already include built-in shortcuts, including ChatGPT, Gemini, Aira, and others.

    For this example, we’re going to choose ChatGPT Live Voice Mode.

    Double tap the ChatGPT Live Voice Mode shortcut. This lets the Action Button launch ChatGPT directly into live speech mode.

    Once selected, your Action Button is now tied to that AI experience.

    If you don’t immediately see the shortcut you want, swipe until you find Show All Shortcuts and double tap. This will reveal all shortcuts stored in your folders.

    This is especially useful if, like me, you organize shortcuts by category. I have multiple AI shortcuts in one folder, including shortcuts for creating calendar events that I share with my better half, as well as different AI tools for different situations.

    Choosing the Right AI for You

    You’re not limited to ChatGPT. You can assign Gemini, Aira, or another AI app depending on what you rely on most.

    For me, ChatGPT live speech works best because it lets me turn on the camera, ask questions verbally, and get help with my surroundings, documents, mail, or quick research without touching the screen much.

    Pick the AI that fits your daily needs.

    Using the Action Button in Real Life

    Once everything is set up, press and hold the Action Button.

    Your chosen AI app launches instantly.

    This is incredibly helpful when you’re on the go. Shopping for groceries, checking clothing, reading paperwork, or navigating an unfamiliar environment becomes much easier when your AI assistant is one button press away.

    Instead of unlocking your phone, finding the app, and navigating menus, you’re immediately connected to live help.

    Final Thoughts

    This setup turns the Action Button into more than just a shortcut. It becomes a personal accessibility tool that adapts to your life.

    If you rely on AI for independence, information, or productivity, this is one of the simplest and most effective ways to make your iPhone work better for you.

    Try it out, experiment with different AI shortcuts, and see which setup fits your routine best.

  • Google Slides quietly added a feature called “Beautify this slide,” and it’s one of those updates that doesn’t sound exciting until you actually use it. This tool takes a basic, text-heavy slide and automatically redesigns it with layout, visuals, and structure that actually make sense. No design background required, no extra apps, and no subscription needed beyond a standard Google account.

    For anyone who creates presentations regularly, this is a real time-saver. Instead of spending hours adjusting fonts, spacing, and visuals, you can focus on the content and let the AI handle the presentation layer.

    What the “Beautify This Slide” Feature Actually Does

    When you’re inside Google Slides, you can select a slide and choose the “Beautify this slide” option. The AI analyzes your text and instantly generates a redesigned version of that slide. It adds visual hierarchy, images, icons, and spacing that match the topic of your content.

    What’s important is that it doesn’t lock you into anything. You can still edit text, swap images, resize elements, or undo the changes completely. Think of it as a smart starting point rather than a final design you’re forced to accept.

    Why This Matters for Productivity

    Presentation design is one of those tasks that eats time without you realizing it. You start adjusting one thing, then another, and suddenly an hour is gone. This feature collapses that entire process into seconds.

    If you’re a creator, educator, student, or someone running meetings, this helps you move faster without sacrificing quality. You get something that looks polished enough to present immediately, even if design isn’t your strength.

    My Take From an Accessibility Perspective

    From an accessibility and usability standpoint, this feature is interesting. For blind and low-vision users, presentations are often about structure more than visuals. What matters is that information is organized clearly and consistently.

    While the visuals themselves may not be accessible to everyone, the underlying benefit is still there. Cleaner layouts usually mean better reading order, clearer sections, and less clutter. That helps when slides are shared afterward or converted into other formats.

    I do think there’s room for improvement here. It would be powerful if Google allowed more control over contrast, font style, or layout density to better support different accessibility needs. Still, as a free tool, this is a strong step in the right direction.

    Is It Really Free?

    Yes, Google Slides itself is free with a Google account. The “Beautify this slide” feature appears to be part of Google’s ongoing AI rollout inside Workspace tools. There’s no separate payment required to use Slides, though access to certain AI features may depend on account type or rollout timing.

    The key point is that you can use this today without paying for a new app, subscription, or plugin.

    Why This Is Bigger Than Just Slides

    This feature signals something larger. Google is slowly baking AI into tools people already use instead of forcing them to learn new platforms. That’s a big deal for productivity and accessibility alike.

    When AI meets users where they already are, adoption becomes easier, workflows stay simple, and more people benefit without friction.

    If this is the direction Google continues to go, everyday tools like Slides, Docs, and Sheets could quietly become some of the most powerful AI assistants available — without the hype.

  • Ray-Ban Meta glasses just received version 21.0, and this is one of the most significant updates the platform has seen so far. This update brings major improvements for everyday use, content creation, fitness tracking, and AI interaction across both Gen 1 and Gen 2 glasses.

    One of the most useful additions is Find Device. You can now see the last connected location of your glasses inside the Meta AI app. If your glasses are misplaced, the app shows the address where they were last active, which can be extremely helpful.

    Meta also expanded video features with hyperlapse recording, slow-motion video capture, and adjustable stabilization settings. These tools give creators more flexibility depending on whether they want smoother footage or a wider field of view.

    For fitness-focused users, Athlete Intelligence connects the glasses to Apple Health, Google Health, Strava, and Garmin. Garmin users also gain automatic video capture during workouts and the ability to create custom run or bike workouts using voice commands.

    AI interaction has also improved. Conversations feel more natural, without needing to pause between responses. Music features now let you request songs based on your surroundings, and Conversation Focus enhances voices directly in front of you while filtering background noise.

    Overall, version 21.0 pushes the Ray-Ban Meta glasses closer to being a true everyday wearable. The update shows Meta continuing to invest heavily in making these glasses more useful beyond just capturing photos and videos.

    Click here to watch the video.

  • 5 Apps That Might Change How We Use AI

    So ChatGPT quietly rolled out what some people are calling an “Apps beta.”

    It’s not officially an app store yet, but it’s definitely starting to feel like one.

    The idea is simple. Instead of jumping between apps, you stay inside ChatGPT and connect services directly into the conversation. You can call them up with the @ command, and ChatGPT can even suggest apps based on what you’re doing.

    Here are five apps that already stand out to me.

    Spotify and Apple Music let you describe a mood or activity and instantly play the right music without leaving the chat. DoorDash and Instacart take it a step further. If you’re talking recipes, ChatGPT can help you order ingredients on the spot. TripAdvisor ties into travel planning, letting you search and book while you’re already asking questions.

    What’s interesting isn’t just the apps themselves. It’s the direction this points to. If developers can build directly inside ChatGPT, this starts looking less like a chatbot and more like a platform.

    I’m curious how far this goes.

    Is this just a convenience feature, or are we looking at an early version of something that could eventually compete with traditional app stores?

    Drop a comment and let me know what you think, and which app you’d actually use.

  • While Microsoft was shaking up local AI agents, MBZUAI introduced something completely different: a world model called PAN. Instead of generating single video clips that instantly forget everything, PAN builds and maintains a continuous digital world that updates every time you give it a new instruction.

    Most video models wipe their memory the moment a clip ends. PAN doesn’t. If you tell it to turn left, then speed up, then pick up a block, each action continues from the last. It’s not producing disconnected visuals—it’s simulating cause and effect. That’s why researchers call it a world model rather than a video generator.

    The model uses Qwen2.5-VL for reasoning and a video generator adapted from Wan 2.1. Instead of letting visuals destroy consistency, PAN keeps reasoning in a stable internal space and then translates those states into video. That prevents objects from morphing or drifting during long sequences, something most video models completely fail at.

    A big part of PAN’s stability comes from its causal, chunk-based refinement system. The model only looks at past frames—not future ones—which forces it to respect continuity. The researchers even add slight noise to keep the model from over-focusing on tiny pixel details and losing track of the scene’s big picture.

    Training this model was a massive project. MBZUAI used 960 Nvidia H200 GPUs, recaptioned thousands of videos to emphasize motion and cause-and-effect, and filtered out anything that wouldn’t help with real-world simulation. The payoff is huge: PAN scores 58.6% overall on action-simulation tasks, outperforming several commercial world models, many of which don’t publish their numbers at all.

    PAN even works as a planning tool. Plugged into an O3-style reasoning loop, it hits 56.1% accuracy on simulation tasks, making it strong enough to act as the “what happens if I do this?” module inside future AI agents.

    This is the direction the industry is moving toward—AI systems that can understand actions, predict consequences, and maintain stable worlds over time. PAN is one of the first open-source models to make that idea feel real.

  • Apple just released iOS 26.2, and while the update brings performance fixes and new features, the bigger story may be what’s happening behind the scenes. Several high-level Apple executives are leaving or transitioning roles, raising questions about Apple’s direction in AI, design, and long-term strategy.

    In this video, I break down what iOS 26.2 actually improves, why this update feels like a course correction, and how recent leadership changes could impact Apple’s future products and software. From AI strategy shifts to design changes and system stability, this is a grounded look at what matters — without the hype.

    If you care about Apple, accessibility, and how these changes affect real users, this one’s worth watching.

    Click here to watch the video.

  • Holiday shopping looks very different when you’re blind — decorations move, mall layouts change, and walkways aren’t always where you expect them to be. In this video, I share how I actually navigate crowded stores using my cane, sound, and mental mapping, and why simple descriptions matter more than people realize. Watch the full video here and let me know what the holidays are like where you shop.

    Click here to watch the video.

  • The new Ray-Ban Meta 20.0 update brought more changes than the official release notes admitted, and as someone who uses these glasses every day as a blind creator, I wanted to break down what actually matters. This is especially important for anyone still on the Gen 1 model, because this update quietly showed just how far Meta is pushing users toward Gen 2.

    The biggest improvement for me is that slow motion and hyperlapse finally work on both Gen 1 and Gen 2. That matters because a feature like hyperlapse lets me walk around, capture the moment, and still keep listening to music or an audiobook since hyperlapse does not record audio. For workflow, that means I can stay focused and stay in motion without losing my rhythm. Slow motion being available on Gen 1 also brings new creative possibilities without upgrading hardware.

    But we have to be honest: Gen 1 users are starting to get left behind. The five-minute recording limit stays exclusive to Gen 2, and that affects creators who record longer clips or rely on the glasses for hands-free filming. When you create content the way I do, every limitation shows up in your workflow fast. This update is also a reminder that Meta now has to support multiple devices at once, and Gen 1 is slowly getting less attention.

    The new app connections layout inside the Meta app is a small but useful improvement. As a blind creator, clarity and organization matter because I navigate everything through a screen reader. Having connected and unconnected apps separated, and seeing what each app is paired with, helps me stay in control of my setup. It also removes the frustration of hunting through settings to disconnect or reconnect something.

    The biggest disappointment is still the missing Conversation Focus mode. That feature was announced months ago, and it was supposed to help isolate the voice in front of you and reduce background noise. Features like that are not just upgrades for blind creators. They’re essential tools for independence. Whether you’re at an event, in a loud venue, or recording something on the go, having clearer audio could change the entire experience. The fact that this still has not arrived on either generation says a lot about how slow some of the accessibility-friendly features are rolling out.

    Overall, the 20.0 update gives Gen 1 owners a few exciting tools, but it also makes it clear that Meta is steering creators toward the newer hardware. As someone preparing courses for blind users and creators, I see updates like this as a reminder to track features version by version, not to assume both models will grow evenly. The good news is that hyperlapse and slow motion expand what’s possible on Gen 1. The bad news is that the gap between Gen 1 and Gen 2 is becoming more obvious every month.

  • Have you ever walked through a loud mall or packed store during the holidays and thought nothing of it? For someone who’s blind, those spaces sound completely different — and navigating them takes a whole mix of listening, patience, and awareness most people never think about.

    In today’s Unseen Adventures video, I break down how I navigate with echolocation, what throws me off, and what actually helps.

    If you’ve ever wondered how blind people move through the world, this one’s for you. Click here to watch

    People always ask me why I use such a long cane — so I finally made a video breaking it down. I’m 5’7”, but my cane is 58 inches… and there’s a real reason for that. Reaction time, safety, confidence — it all plays a part.

    I even went outside and showed how I use it in real streets so people can see the difference.

    Click here to watch