There are a lot of things AI is (getting) good at. And there are a lot of AI-based solutions available. Sometimes you can miss the forest for the trees, and many times, all this AI stuff feels overwhelming.
Note that we’re looking into when to AI: we’ll borrow a few prioritization frameworks from product management and focus on how to decide when to use AI, rather than how to use it.
The Cognitive Overhead of ‘AI-Everything’

We’re (over)promised and told AI can do anything. But if you try using AI for everything, for the sake of using AI, it will likely (and ironically) create more work and friction for you to deal with. Finding the right balance means finding a ‘Minimum Viable Solution’ – the point where AI actually gives you time and value back, with the least possible effort, focus, and friction, rather than just another tab to manage.
Here are a few prioritization frameworks from product management that can be helpful as you try to find your ideal ‘Minimum Viable Solution’.
1. The Eisenhower Matrix

The Eisenhower Matrix is a famous 2×2 grid that divides work/life items by urgency and importance. I originally learned about this method back in college, and it actually comes from personal development/self-help, not from product management. However, it’s been useful both in my work as a product person and in my personal life.
It’s a simple, yet powerful framework. The work/life items get divided into:
- Urgent, and important
- Important, but not urgent
- Urgent, but not important
- Neither urgent, nor important
Urgent and important: These tasks require your immediate attention and have a significant impact on your goals and priorities. Some examples include important deadlines, high-priority emergencies, important exams, or important meetings.
Important but not urgent: These tasks are essential for achieving your long-term goals but do not require immediate attention. Examples would be strategic planning, long-term projects, personal development, and relationship-building.
Urgent but not important: These tasks demand your immediate attention but they have little to no impact on your long-term goals. Examples are interruptions, unimportant calls, messages, and meetings, or minor issues.
Neither urgent nor important: These tasks don’t require immediate attention and have little to no impact on your long-term goals. Examples include time-wasting activities and trivial tasks.
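As a sketch, the four quadrants boil down to a simple lookup. The labels and suggested actions below are illustrative, not the method’s canonical wording:

```python
# Minimal Eisenhower quadrant lookup; labels/actions are illustrative.
def quadrant(important: bool, urgent: bool) -> str:
    return {
        (True, True): "Do now: urgent and important",
        (True, False): "Schedule: important, not urgent",
        (False, True): "Delegate: urgent, not important",
        (False, False): "Drop or automate: neither urgent nor important",
    }[(important, urgent)]

print(quadrant(important=False, urgent=False))
# → Drop or automate: neither urgent nor important
```

The point isn’t the code itself, but that the decision has exactly two inputs – which is what makes the matrix fast enough to apply daily.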
What is the cost of the AI being wrong?
An “Important/Urgent” task is often the last thing you should AI if you don’t have a verification loop.
Without understanding both the importance and urgency of our options, we can feel overwhelmed; continuously neglect important, but not urgent items; and overall ‘lose the compass’.
There will always be fires and emergencies, distractions, and time-wasting stuff, but the awareness of where different items fall is the first step towards more balance. Note the word balance, as we also need some time-wasters, and it’s ok to cut yourself some slack and wind down a bit.
In practice
My approach here is to slowly build processes for all areas. I know that’s a bit contrary to ‘don’t AI everything’, but there’s little that isn’t possible nowadays. Initially, I’m focusing on removing everything not important, so I don’t waste time on it. (The downside is also limited, and the risk low, if something goes wrong.) It’s not reflected in the matrix, but I also look at frequency: the more frequent an item, the sooner it gets automated/delegated. With important items, I do try to automate or delegate to AI, but I add more checkpoints where I verify closely than I do for unimportant items.
2. Value vs. Effort

A very straightforward framework is looking into what you get and what you give, value vs. effort.
Beyond ‘just’ time spent and time saved, you can factor in priority and importance, but without overengineering it, this can be as simple as comparing time.
Value of AI Solution = (Frequency × Time Saved) − Setup Friction − (Frequency × Execution Friction)
I have daily journal entries covering tasks planned with their done status, mood, ‘plus, minus, next’, notes etc. Copy-pasting and editing daily takes time, and this was one of the first candidates for delegating to AI. Now I have a simple command ‘/daily’ for Claude Code, and it will create my daily entry, add unfinished tasks from the previous day, and prepare the template for me to fill in. It takes me less than 2 seconds, and it’s mostly frictionless.
Note that most people fail here because they ignore Execution Friction, the daily ‘tax’ of using a flow. If your AI assistant requires a perfect 10-minute prompt every morning, the system will eventually collapse under its own weight.
The (hidden) maintenance cost
Beyond setup and execution, AI tools (like Claude Code) are not “set it and forget it.” They require prompt tuning and troubleshooting. Once you’re comfortable with the formula, build on it and account for a ‘degradation’ or ‘maintenance’ variable.
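A minimal sketch of the formula in Python, with the maintenance variable folded in. All units are assumed to be minutes, and the numbers in the example are illustrative (roughly a month of a daily flow), not measurements:

```python
# Net value of an AI flow, in minutes, including a per-period maintenance cost.
def ai_flow_value(frequency, time_saved, setup_friction,
                  execution_friction, maintenance=0.0):
    per_use = time_saved - execution_friction        # net minutes per run
    return frequency * per_use - setup_friction - maintenance

# ~30 runs/month, 5 min saved per run, 60 min of setup,
# 30 s of friction per run, 20 min/month of prompt tuning.
print(ai_flow_value(frequency=30, time_saved=5, setup_friction=60,
                    execution_friction=0.5, maintenance=20))  # → 55.0
```

Note how quickly a small per-run execution friction scales with frequency: it’s multiplied on every use, while setup and maintenance are paid once per period.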
P.S. Read about how I use AI for journaling in Episode 02: Journaling on Steroids.
3. (R)ICE

RICE and ICE are two quite similar frameworks, with RICE coming from ‘AI customer service company’ Intercom.
RICE stands for Reach, Impact, Confidence, Effort, and you can read about the framework in detail on the Intercom blog.
ICE stands for Impact, Confidence, Ease.
- Impact refers to the potential positive impact of the feature
- Confidence refers to the level of confidence that the Impact was predicted accurately
- Ease reflects effort to build the flow (the higher the score, the lower the effort / easier it is)
‘Confidence’ usually refers to market data. In our ‘AI context’, it should refer to the reliability of the output.
Lastly, since you’re managing your own time and life, ICE may make more sense at first. But ‘reach’ doesn’t have to mean the number of customers: you could, for example, quantify how many areas of your life a flow will improve.
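Both scores are simple products; a sketch assuming 1–10 scales for each factor, with RICE in its usual form of dividing by effort. Per the text above, read ‘confidence’ here as the reliability of the AI’s output:

```python
# Simple (R)ICE scorers, assuming 1-10 scales for each factor.
def ice(impact: int, confidence: int, ease: int) -> int:
    return impact * confidence * ease

def rice(reach: int, impact: int, confidence: int, effort: int) -> float:
    # RICE divides by effort instead of multiplying by ease.
    return reach * impact * confidence / effort

# Example: a high-impact, easy flow whose output you only half trust.
print(ice(impact=8, confidence=5, ease=9))              # → 360
print(rice(reach=3, impact=8, confidence=5, effort=2))  # → 60.0
```

The absolute numbers don’t matter; what matters is ranking candidate flows against each other, so a low-confidence (unreliable) output drags a flow down the list even when impact is high.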
4. Accounting for Quality
The frameworks we mentioned (Eisenhower, (R)ICE, Value/Effort) still focus almost entirely on time and impact. They don’t explicitly account for the quality of the output. In some cases, AI can provide a ‘low-effort’ solution for a ‘high impact’ item, but at ‘lower quality’ than a human version. The mentioned frameworks don’t yet help a user decide whether that quality trade-off is acceptable.
AI Is a Good ‘Gig-Worker’, But Even Better Architect of Context
We’ve been conditioned to use AI for the ‘daily doing’: writing the email, drafting the Slack message, or cleaning a single CSV file. While helpful, this is a linear gain. If it takes you 30 seconds to prompt an AI to save you 2 minutes of writing, you’ve gained 90 seconds. It’s a win, but it’s not a game-changer.
In my opinion, the one thing most knowledge workers miss is that AI’s true superpower isn’t execution; it’s synthesis.
Fuzzy Inputs, Structured Insights
Your life doesn’t happen in neat rows and columns. Life happens in ‘fuzzy’ bursts: a 2-second voice memo about a project risk, a quick daily note about a frustrating meeting, or a “plus/minus/next” entry written while you’re half-caffeinated.
Individually, these are low-value fragments. But collectively, they are the Minimum Viable Context for a massive breakthrough.
The real ‘When to AI’ moment isn’t when you’re staring at a blank email; it’s when you need to connect the dots across 30 days of ‘fuzzy’ data.
- The Human’s Job: Provide the 2-second, frictionless “Fuzzy” inputs. Capture the stress, the ideas, and the next steps without overthinking the structure
- The AI’s Job: Perform the 2-hour heavy lift of periodic synthesis
This is where Explainable AI (XAI) becomes your best friend. A good synthesis doesn’t just say: ‘You had a productive month’. It says, ‘Based on your 12 ‘minus’ notes regarding stakeholder delays, your primary bottleneck is X’. Because the AI can trace its logic back to your daily 2-second inputs, the insight can be validated, making it trustworthy and actionable.
The synthesis challenges
For AI to synthesize ‘fuzzy’ notes accurately, it needs high-quality context. If the inputs are too fuzzy or too low-effort, the AI could produce ‘hallucinated insights’ or generic platitudes. I assumed above that the AI is a ‘trustworthy architect’, but in reality, synthesis can be where LLMs are most prone to ‘averaging out’ unique insights into generic ‘corporate speak’. On the other hand, with my daily entries being relatively structured, I’ve personally had great results, and, as mentioned, I do tend to ‘trust but verify’ anything important.
Build for Compound Interest
The systems that compound aren’t the ones that are the most sophisticated; they are the ones with the lowest daily friction.
Stop trying to build an ‘automated life’. Instead, build a system where the input is trivially easy, and the synthesis is periodically profound. Start with the ‘low-hanging fruits’ to gain results and motivation to do more. Set up a meaningful flow, review what you have and where you want to be. And take it step by step.
When should you AI? Not just to do the work, but to understand the work you’ve already done.
On privacy
If you’re feeding your raw, ‘fuzzy’ thoughts, private journal entries, or sensitive work ideas into an AI to help you make sense of them, you have to stop and ask where this data is actually going. It’s a big topic, but I will dive into it elsewhere.
- Featured image and illustrations generated with AI (except for the Eisenhower Matrix and RICE)
