Hey {{First name|there}}! It’s Aaron.
AI used to help you think. Now it wants to do the work for you.
Schedule meetings. Draft emails. Build presentations.
Microsoft calls it the next phase of productivity.
But here’s the real question: When AI starts executing tasks, who’s actually in control?
Let’s break it down.
📌TL;DR
Copilot Cowork: Microsoft wants AI to execute work, not just answer questions — but delegation only works if you trust AI’s judgment.
Gemini Workspace: Google is turning Docs, Sheets, and Drive into an AI layer that can draft, analyze, and organize your work automatically.
Photoshop AI: Adobe’s new prompt-based assistant lets creators edit images with simple instructions instead of complex tools.
More AI news…
Estimated reading time: 5 minutes.

CATCH OF THE DAY
Microsoft Wants You to Delegate to AI.
But Can AI Actually Decide?

Source: Fusionsol
Microsoft just introduced Copilot Cowork, an AI that schedules meetings, drafts emails, builds presentations, and compiles research — all from one prompt.
Microsoft calls this “AI execution.” Instead of asking AI for help, you delegate work.
Sounds productive. But here’s the problem Microsoft doesn’t mention.
Delegation requires trust in decision-making. But AI doesn’t make decisions; it predicts patterns.
From answering questions to running workflows
Most AI tools today follow a simple loop.
You ask a question. The AI generates an answer. You take it from there.
Copilot Cowork extends that loop.
You describe an outcome. The AI builds a plan.
You approve it. Then it executes tasks across Microsoft 365.
That might include rescheduling meetings, compiling research, generating reports, or preparing briefs.
The system pulls context from emails, files, spreadsheets, and meetings to assemble deliverables automatically.
In theory, that removes busywork. In practice, it introduces a new dependency.
Because delegation only works if the AI understands what actually matters.
Microsoft doesn’t own the AI
Here’s what Microsoft didn’t emphasize.
Copilot Cowork runs on Claude from Anthropic. Microsoft isn’t building the intelligence. It’s licensing it.
That makes Microsoft the platform layer, not the model provider. Which raises an interesting question.
If Microsoft is just packaging Claude inside Microsoft 365, why pay Microsoft a premium for it?
Claude Pro costs about $20 per month. Microsoft 365 with Copilot typically costs significantly more depending on the plan.
For enterprise teams, the integration may justify the price.
For solo creators, it might not.
The real trade-off creators should consider
Microsoft’s pitch is simple. Tell the AI what outcome you want. Approve the plan. Let it execute.
But that assumes the AI understands intent.
And that’s where friction appears.
AI can schedule meetings… but it doesn’t know which ones are politically sensitive.
AI can generate research… but it doesn’t know which sources your audience trusts.
AI can build presentations… but it doesn’t know what actually persuades people.
So delegation becomes something slightly different.
You’re not removing work. You’re shifting into quality control, reviewing decisions the AI makes on your behalf.
Sometimes that saves time.
Other times it means untangling work the AI got wrong.
Should creators actually use this?
Copilot Cowork makes the most sense if you’re already locked into Microsoft 365 for team workflows.
If your work lives inside Outlook, Teams, and Excel, the integration could be useful.
But for many solo creators, it may be unnecessary.
Tools like Claude, ChatGPT, and Notion AI already handle research, drafting, and ideation without enterprise overhead.
In other words, the value of Copilot Cowork depends less on the AI itself and more on how committed you are to Microsoft’s ecosystem.
The Final Byte
Microsoft wants “delegation” to sound like the future of productivity.
But delegation requires judgment, not just task execution.
AI can complete tasks.
It can’t decide what matters.
So you’re not eliminating work. You’re becoming the manager of software that’s guessing at your priorities.
For some workflows, that’s a huge productivity boost. For others, it’s just micromanaging a machine.
The real question isn’t whether AI can delegate work.
It’s whether you trust it to make the calls that used to be yours.
See you in the next one,


BYTE-SIZED BUZZ
Here’s a quick roundup of what’s making waves in the AI world this week.
Google rolled out deeper Gemini integration across Docs, Sheets, Slides, and Drive. The AI can now draft documents, generate spreadsheets, match writing styles, and answer questions using context pulled from emails and files in your Workspace.
The Big Deal: Google is quietly turning Workspace into an AI assistant that understands your entire knowledge base — making everyday tools like Docs and Sheets far more powerful for creators and teams.
Perplexity launched Personal Computer, a Mac mini that runs AI agents 24/7 with local access to your files. Think: continuous research monitoring, overnight content processing, or automated workflow tasks without cloud dependencies.
The Big Deal: For creators worried about cloud AI access or recurring costs, dedicated local AI hardware offers an alternative—but at the price of buying and maintaining separate equipment.
Reports suggest OpenAI is planning to integrate its Sora video generator directly into ChatGPT instead of keeping it as a separate tool. The move would let users create videos inside the same interface used for writing and image generation.
The Big Deal: If Sora lands inside ChatGPT, the platform could evolve into an all-in-one creation studio for text, images, and video — dramatically lowering the barrier for creators to produce multimedia content.
Adobe launched a beta AI assistant for Photoshop that allows users to edit images using simple natural-language prompts. Tasks like removing objects, changing lighting, or adjusting colors can now be done by describing the change instead of manually editing.
The Big Deal: Image editing is shifting from technical skill to creative direction — meaning creators can focus more on ideas while AI handles the heavy lifting.
Cybercriminals are using AI to write convincing phishing emails and fake identities. For creators managing brand collaborations or client relationships via email, AI-generated scams are getting harder to spot.
The Big Deal: As AI makes phishing more sophisticated, creators need stronger verification habits—double-checking sender details, using video calls for sensitive requests, and treating unexpected payment/login requests with suspicion.
WEEKLY CREATOR LOADOUT 🐾
Descript*: Edit videos and podcasts like a document with AI transcription, filler-word removal, and studio-quality audio tools.
Claude (Anthropic): Generate insights, explanations, and interactive charts directly in chat for research, writing, and teaching.
Uni-1 (Luma): Create multimodal content by generating visuals and text together for storytelling and creative workflows.
TADA (Hume AI): Produce expressive AI voiceovers for videos, podcasts, and narration with natural speech generation.
GoPost: Generate viral tweets, blog posts, and social media content to quickly repurpose creator ideas into posts.
*Affiliate Disclaimer: If you sign up for a paid subscription through my affiliate link, I earn a small commission—think of it as a high-five ✋🏼that helps keep this content rolling.
THE GUIDEBOOK
New to AI tools?
Check out past tutorials, tool reviews, and creator workflows—all curated to help you get started faster (and smarter).
SUGGESTION BOX
What'd you think of this email?

BEFORE YOU GO
I hope you found value in today’s read. If you enjoy the content and want to support me, consider checking out today’s sponsor or buying me a coffee. It helps me keep creating great content for you.
New to AI?
Kickstart your journey with…
ICYMI
Check out my previous posts here



