Applied AI Readiness Assessment — Concept Exploration
November 20, 2025
The Problem
Business owners and institution leaders hear "you need AI" constantly but don't know:
- If it's actually relevant to their situation
- Where to start
- Whether the ROI is worth it
- If they're ready to implement
We need a way to help them self-assess before they talk to anyone—or to qualify them before we spend time on discovery.
Should It Be a Wizard?
Pros of a wizard:
- Structured, low-friction entry point
- Feels actionable ("take this 5-minute assessment")
- Can capture leads/emails naturally
- Produces a shareable result ("You're at AI Readiness Stage 2")
- Scales with zero marginal effort once built
Cons of a wizard:
- Can feel gimmicky or like a marketing funnel
- Misses nuance that a conversation would catch
- People might not trust an automated recommendation
- Risk of oversimplifying complex situations
Verdict: A wizard works as a first filter, not a final answer. Position it as "see if Applied AI might be relevant" not "here's your diagnosis." The wizard should end with a human conversation option.
Alternative Approaches to Consider
1. Simple Checklist ("10 Signs You Need Applied AI")
- Lower friction than a wizard
- Shareable as content marketing
- No data capture, but builds trust
- Example: "If you checked 4+, you should talk to someone"
2. ROI Calculator
- More concrete: "Enter your hours spent on X, we'll estimate savings" (see the savings sketch after this list)
- Feels less salesy, more useful
- Requires knowing what to measure (harder for uninformed users)
3. Pain Point Diagnostic
- Start with symptoms, not solutions
- "What's frustrating you?" → map to AI opportunities
- Feels more like a consultation
4. Hybrid: Checklist → Wizard → Human
- Checklist as blog post (awareness)
- Wizard as deeper assessment (consideration)
- Human call as final step (decision)
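To make the ROI calculator idea concrete, here's a minimal sketch of the math it could run. Every input, and especially the assumption that 60% of a task is automatable, is illustrative rather than a validated benchmark:

```typescript
// Illustrative ROI estimate. All inputs and the automation rate are
// assumptions for this sketch, not validated benchmarks.
interface RoiInputs {
  hoursPerWeekOnTask: number; // user-entered
  loadedHourlyCost: number;   // wages + overhead, user-entered
  automationFraction: number; // e.g. 0.6 = assume 60% of the task is automatable
}

function estimateAnnualSavings(i: RoiInputs): number {
  const weeklySavings = i.hoursPerWeekOnTask * i.automationFraction * i.loadedHourlyCost;
  return weeklySavings * 52; // 52 weeks, ignoring ramp-up time
}

// Example: 20 hrs/week of manual reporting at $40/hr, 60% automatable
// → 20 * 0.6 * 40 * 52 = $24,960/year
console.log(estimateAnnualSavings({
  hoursPerWeekOnTask: 20,
  loadedHourlyCost: 40,
  automationFraction: 0.6,
}));
```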
If We Build a Wizard: Key Design Decisions
What are we assessing?
1. Problem Fit — Do they have problems AI can actually solve?
- Repetitive tasks consuming staff time
- Communication bottlenecks
- Data scattered across systems
- Manual processes that feel outdated
2. Readiness — Are they able to implement?
- Leadership buy-in
- Budget availability
- Technical infrastructure (or willingness to build)
- Change management capacity
3. Urgency — How pressing is this?
- Competitive pressure
- Cost pressure
- Growth constraints
- Compliance/regulatory demands
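If this three-dimension model holds, the wizard's internal state can stay small: one sub-score per dimension, collapsed into the low/high tiers the scoring logic uses later. A minimal TypeScript sketch; the 0-10 scale and the cutoff at 5 are assumptions, not decisions we've made:

```typescript
// Three assessment dimensions from the model above.
// The 0-10 scale and the cutoff at 5 are assumptions for this sketch.
interface ReadinessProfile {
  problemFit: number; // 0-10: do they have AI-solvable problems?
  readiness: number;  // 0-10: can they implement (buy-in, budget, infra)?
  urgency: number;    // 0-10: how pressing (competition, cost, growth)?
}

type Tier = "low" | "high";

// Collapse a sub-score into the low/high tiers used by the scoring logic.
function tier(score: number): Tier {
  return score >= 5 ? "high" : "low";
}

// Example: strong problem fit, shaky readiness, moderate urgency.
const example: ReadinessProfile = { problemFit: 7, readiness: 4, urgency: 6 };
```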
Question Flow (Draft)
Section 1: Current Pain Points
1. How much time does your team spend on repetitive administrative tasks per week?
- Less than 5 hours
- 5-20 hours
- 20-50 hours
- 50+ hours
2. Which of these describes your biggest operational frustration? (select all that apply)
- Staff doing manual data entry
- Slow response times to customers/clients
- Information scattered across systems
- Scheduling/coordination headaches
- Reporting takes too long
- None of these
3. Have you tried to solve these problems before?
- Yes, with software tools (mixed results)
- Yes, by hiring more people
- No, we've just lived with it
- We don't have major problems
Section 2: AI Exposure
4. How would you describe your team's current use of AI tools?
- We don't use any AI tools
- A few individuals use ChatGPT or similar
- We've experimented but nothing stuck
- We have some AI tools integrated into workflows
5. What's your biggest concern about AI?
- It won't actually work for our use case
- It's too expensive
- My team won't adopt it
- Security/privacy concerns
- I don't know where to start
- I'm not concerned, I'm ready
Section 3: Readiness
6. Do you have budget allocated for operational improvements this year?
- Yes, significant budget
- Yes, but limited
- No, but could find it for the right solution
- No budget available
7. If we found a solution that saved 10+ hours per week, who would need to approve it?
- Just me
- Me + one other person
- A committee or board
- I'm not sure
8. How would you describe your organization's appetite for change?
- We move fast and try new things
- We're cautious but open
- We're slow to change
- We're actively resistant to change
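Whatever tool we end up on, it helps to encode the draft questions as data so the same definitions can drive a form-tool import or a custom build. A sketch of one possible schema: the question IDs and weights below are placeholders to tune during testing, not calibrated values.

```typescript
// Each answer option nudges one or more dimension sub-scores.
// Question IDs and weights are placeholders, to be tuned in testing.
interface AnswerOption {
  label: string;
  weights: Partial<{ problemFit: number; readiness: number; urgency: number }>;
}

interface Question {
  id: string;
  prompt: string;
  multiSelect?: boolean; // e.g. question 2's "select all that apply"
  options: AnswerOption[];
}

const q1: Question = {
  id: "repetitive-hours",
  prompt: "How much time does your team spend on repetitive administrative tasks per week?",
  options: [
    { label: "Less than 5 hours", weights: { problemFit: 1 } },
    { label: "5-20 hours",        weights: { problemFit: 4 } },
    { label: "20-50 hours",       weights: { problemFit: 7, urgency: 3 } },
    { label: "50+ hours",         weights: { problemFit: 10, urgency: 6 } },
  ],
};
```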
Scoring Logic (Simplified)
- High Fit + High Readiness → "You're a strong candidate for Applied AI. Let's talk."
- High Fit + Low Readiness → "AI could help, but you may need to build internal readiness first. Here's how."
- Low Fit + High Readiness → "You might not need AI yet—but here's what to watch for as you grow."
- Low Fit + Low Readiness → "Focus on other priorities first. Here are some resources to revisit later."
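The quadrant logic is small enough to express directly. A sketch, self-contained for clarity (the 0-10 sub-scores and the cutoff at 5 are the same assumptions as in the earlier sketch); the four messages are taken verbatim from the table above:

```typescript
// Map the fit/readiness quadrant to the four outcomes above.
// The 0-10 scale and the cutoff at 5 are placeholder assumptions.
type Tier = "low" | "high";
const tier = (score: number): Tier => (score >= 5 ? "high" : "low");

function recommend(problemFit: number, readiness: number): string {
  const fit = tier(problemFit);
  const ready = tier(readiness);
  if (fit === "high" && ready === "high")
    return "You're a strong candidate for Applied AI. Let's talk.";
  if (fit === "high")
    return "AI could help, but you may need to build internal readiness first. Here's how.";
  if (ready === "high")
    return "You might not need AI yet—but here's what to watch for as you grow.";
  return "Focus on other priorities first. Here are some resources to revisit later.";
}
```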
Output Options
Option A: Score + Recommendation
- "Your Applied AI Readiness Score: 72/100"
- Personalized recommendation paragraph
- Specific next steps based on their answers
Option B: Archetype Assignment
- "You're a Ready Operator" or "You're a Curious Explorer"
- Makes it shareable/memorable
- Risk: feels like a BuzzFeed quiz
Option C: Prioritized Opportunity List
- "Based on your answers, here are 3 areas where AI could help most:"
- Customer communication (high impact, low effort)
- Reporting automation (medium impact, medium effort)
- Scheduling optimization (high impact, high effort)
- Most actionable, least gimmicky
Recommendation: Option C feels most aligned with the Applied AI Society ethos—practical, specific, not hype-y.
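As a sketch of how Option C's output could be generated: score each candidate area from the user's answers, then sort by impact relative to effort. The areas and numbers below are illustrative; in practice they'd be derived from the quiz responses.

```typescript
// Rank opportunity areas by impact relative to effort (both 1-10).
// Area names and scores are illustrative; real values would come from answers.
interface Opportunity {
  area: string;
  impact: number; // 1-10, inferred from pain-point answers
  effort: number; // 1-10, inferred from readiness answers
}

function topOpportunities(all: Opportunity[], n = 3): Opportunity[] {
  return [...all]
    .sort((a, b) => b.impact / b.effort - a.impact / a.effort)
    .slice(0, n);
}

const ranked = topOpportunities([
  { area: "Customer communication",  impact: 8, effort: 3 },
  { area: "Reporting automation",    impact: 6, effort: 5 },
  { area: "Scheduling optimization", impact: 8, effort: 8 },
]);
// → customer communication first (high impact, low effort), matching the example above
```

Ranking by the impact/effort ratio is just one choice; a weighted difference would work too, and testing should tell us which ordering feels right to users.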
Technical Implementation Options
1. Simple Form Tool (Typeform, Tally, etc.)
- Fast to build
- Limited logic/personalization
- Good for MVP
2. Custom Web App
- Full control over UX and logic
- Can integrate with CRM
- More work to build
3. AI-Powered Conversation
- Chatbot-style assessment
- More natural, can handle nuance
- Feels more "Applied AI Society"—eating our own cooking
Recommendation for MVP: Start with Typeform or similar. Test with 20-30 people. Learn what questions matter. Then decide if a custom build is worth it.
Next Steps
- Validate the questions — Run draft questions by 5 business owners. Do they understand them? Do the options feel relevant?
- Build MVP — Create in Typeform with basic branching logic
- Test and iterate — See what results people get, whether they feel accurate
- Decide on output format — Based on feedback, pick score vs. archetype vs. opportunity list
- Connect to funnel — High-readiness results → offer a call; low-readiness → offer resources
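Connecting results to the funnel (the last step above) can be a single branch on the fit/readiness tiers. A sketch; the URLs are placeholders, not real pages:

```typescript
// Route a completed assessment to the next funnel step.
// URLs are placeholders, not real endpoints.
function nextStep(highFit: boolean, highReadiness: boolean): { label: string; url: string } {
  return highFit && highReadiness
    ? { label: "Book a call", url: "https://example.com/book-a-call" }
    : { label: "Browse resources", url: "https://example.com/resources" };
}
```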
Open Questions
- Should this be gated (email required) or open?
- Do we show results immediately or email them?
- Should there be different versions for different industries (schools, services, retail)?
- How do we prevent people gaming the quiz to get a "good" result?
- Is "wizard" even the right word? "Assessment" or "diagnostic" might feel less gimmicky.