Mega-Prompt: High-Impact Executive Keynote Generator
— START OF PROMPT —
PART 1: The Persona & Goal
Act as an expert keynote speechwriter and world-class presentation coach. Your client is a senior executive preparing for a major conference. Your goal is to transform their core “big idea” into a powerful, memorable, and high-impact 24-minute keynote speech in the style of the best TED Talks. You will use the information they provide below to create a comprehensive presentation package.
PART 2: The Executive’s Input
This section contains the raw material for the speech. Please provide thoughtful and detailed answers.
1. The Big Idea:
- What is your central idea, stated in a single, compelling sentence?
- Treat AI as a learning system, formalize what workers already do well, and rewire one workflow at a time where the ROI actually lives.
2. The 6 Foundational Questions:
- Question 1: The Pain Point: What specific, nagging, and urgent pain point does your big idea solve for the audience?
- Leaders are stuck in an exhausting paradox: massive opportunities for AI in the enterprise; massive build-out costs when traditional work teams try to build AI themselves; traditional IT release schedules measured in annual cycles while AI companies ship weekly or even daily; minimal enterprise-level P&L impact; and rising risk from unmanaged, bottom-up usage. The pain is the mismatch: we’ve funded pilots that don’t learn, while our individual users have already learned how to use AI in the flow of work. Official deployments stall; unofficial ones thrive. Headlines fixate on failure; reality is quietly changing in the inbox, the ticket queue, and the spreadsheet.
- Question 2: The Confusion: What common misunderstanding, myth, or confusion does your big idea correct?
- “AI is failing because 95% of pilots fail.” Reality: That statistic applies to custom, top‑down enterprise builds—not to how people actually work. Meanwhile, a bottom‑up shadow AI economy is thriving: employees use personal AI at far higher rates than official adoption figures suggest.
- “The models aren’t ready.” Reality: The core failure is missing learning loops—tools that don’t retain feedback, adapt to context, or integrate with real workflows. Fix the loop, and value shows up.
- “Model swapping will save my solution.” Reality: The cheapest performance upgrade isn’t a bigger model; it’s a better loop: human review in, context remembered, workflow-aware execution out.
- “ChatGPT with GPT-4o is the same thing as OpenAI’s GPT-4o API.” Reality: They share the same underlying model, but they’re not the same product. ChatGPT is the packaged app with guardrails, memory, and a UI layer; the GPT-4o API is the raw model service you wire into workflows. One is a polished consumer tool, the other is a developer platform; confusing them leads to mismatched expectations.
- “Value lives in flashy front‑office use cases.” Reality: The best ROI hides in back‑office seams—service ops, document flows, procurement—where automation cuts external spend and cycle time, quietly and materially.
- “Governance means clamping down on shadow AI.” Reality: Smart orgs learn from shadow usage, then sanction and secure what already works—turning quiet wins into safe, supported standards.
- “Build > Buy.” Reality: In practice, external partnerships reach deployment far more often than internal builds—when vendors are held to operational outcomes, not demo metrics.
- “Productivity gains require layoffs.” Reality: Early gains show up as reduced external spend (BPO, agencies) and faster throughput—without broad workforce reduction.
- Question 3: The Knowledge Gap (Authority): What does the audience think they know about this topic that is incomplete or wrong? What is the crucial gap in their knowledge that you will reveal?
- What many believe: “AI isn’t paying off, so the models must not be ready.” What’s missing: It’s not the models; it’s the learning loop. Most enterprise tools can’t ship changes on a weekly or daily release schedule; they don’t retain feedback, adapt to context, extend trust and earn honest feedback, or improve with use, so they feel brittle and over-engineered. Meanwhile, workers choose flexible, general-purpose tools that “learn with them” (or at least feel like they do) and slot into existing rhythms. Add a budget bias toward front-office sizzle over back-office throughput, and you get spectacular demos, from product owners up to executives, with weak P&L impact. The gap is process, not promise. Rewire workflows, build trust, elicit and value feedback, and design memory and integration into the system; when that is done, value will show up.
- Question 4: The Personal Stake (Rapport): What was missing in your own professional life or organization before you discovered/implemented this idea? Share a brief, personal story of your “before” state.
- Before I learned this, I was in charge of the Copilot for M365 rollout. Most of the feedback I got was that the tool was terrible and couldn’t be made to do any good work. The tool itself was impressive in a couple of ways: we could demo the meeting transcripts, and we could demo the email writing. I wrote a bunch of prompts for people to extract requirements from meetings, and a bunch of prompts to do quality reviews. But the big winners were people in the business who showed that our training materials could be generated with Copilot, reducing months of work into a week’s worth. What we learned was that if people trusted the AI, worked every day to get better at it, and provided vendor feedback through the built-in tools, things got constantly better.
- The turning point came when we started work on the RIMS/RCMS activity. This wasn’t a simple architecture or a simple application. The team spent six to nine months working with the support staff on a daily basis to tweak and modify the solution. Every moment was spent making the overall application better. Changes were not daily, but there were changes every other day. The big shift came when we stopped blaming “immature models,” acknowledged our data, process, and technology gaps, and listened to our users: not the proxy users we had, but the actual people who needed to get work done. We met with the process owners and their people all the time; we trusted them, and they trusted us. We learned what actually worked and said the hard words to leadership: our people create value, not some demo meant to convince people to do the work.
- Question 5: The Improvement Story (Vision): Briefly describe a specific, real-world example of how your idea has tangibly improved someone else’s life, team, or company. This should be a story.
- The MIT report describes findings that mirror work we are actually doing. We are using AI to do legal work. We are using AI in customer service (both front-office and back-office areas). We are reducing spend on external agencies that write and validate copy. There are many other ideas we have been working on in this space, and none of them are flashy demoware. They are actual operational changes that are improving lives for people on the ground. This is not hype; it is quiet operational excellence, achieved by embracing tools that adapt to the work rather than forcing workers to adapt to some new tool.
- Question 6: The Execution Steps: What are the 3 most critical, high-level steps to execute your big idea? Keep them simple, memorable, and action-oriented.
- Step 1: Surface -> Action -> Secure. Talk to actual users who are doing good work, and see what they are actually doing. Do not outsource experiments to consultants or bypass SMEs and Process Owners. Then inventory real usage, patterns, and wins; convert the best into sanctioned, approved use cases; and wrap them with lightweight guardrails (data boundaries, citation norms, review triggers). The goal is not crackdown; it’s clarity and safety around what already works.
- Step 2: Rewire One Workflow End-to-End. Pick an unglamorous, high-volume process (work triage, exception handling, case intake). Replace point solutions with a learning-capable loop: human feedback captured by design, context memory (retrieval over your docs), and integration into the actual system of record. Measure throughput, cycle time, error rate, external spend, and the other standard Operational Excellence metrics from Six Sigma; the P&L will follow.
- Step 3: Buy, Then Build (with Outcomes, Not Demos). Start with partners who can reach deployment now. Stick with partners who let you change things on a weekly or daily basis, and hold teams to operational outcomes (tickets closed, invoices cleared, minutes saved), not benchmark scores. Only build internally where you have a durable advantage, such as process uniqueness or a regulatory need rooted in data sovereignty. The odds are simply better this way: about 2× higher deployment success.
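Step 2’s learning-capable loop can be sketched in miniature. The code below is a hypothetical illustration, not a real product API: the class and method names are invented, the “retrieval” is naive keyword matching standing in for retrieval over your docs, and the model call is stubbed out. It only shows the three ingredients wired together: context memory, human feedback captured by design, and execution against the work item.

```python
from dataclasses import dataclass, field


@dataclass
class LearningLoop:
    """Minimal sketch of a learning-capable workflow loop (hypothetical)."""
    memory: list = field(default_factory=list)        # context memory (stand-in for doc retrieval)
    feedback_log: list = field(default_factory=list)  # human review captured by design

    def recall(self, query: str) -> list:
        """Naive keyword retrieval over remembered snippets."""
        return [doc for doc in self.memory if query.lower() in doc.lower()]

    def handle(self, ticket: str) -> str:
        """Draft a response using remembered context (the model call is stubbed out)."""
        context = self.recall(ticket)
        return f"DRAFT for {ticket!r} using {len(context)} remembered snippet(s)"

    def review(self, ticket: str, draft: str, approved: bool, note: str = "") -> None:
        """Capture the human verdict; approved work becomes future context."""
        self.feedback_log.append((ticket, approved, note))
        if approved:
            self.memory.append(f"{ticket}: {draft} [{note}]")


# Seed the loop with one sanctioned snippet, draft, then capture the review.
loop = LearningLoop(memory=["invoice: route EU invoices to shared-services queue"])
draft = loop.handle("invoice")
loop.review("invoice", draft, approved=True, note="correct queue")
# The loop now holds more context than it started with: it learns with use.
```

The point of the sketch is the shape, not the stub: whatever tool you buy or build, feedback must land somewhere the next run can see, or the system never gets past demo quality.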
PART 3: The AI’s Task (Your Deliverables)
Based only on the executive’s input above, generate the following five deliverables. Maintain the persona of a master speechwriter throughout.
Deliverable 1: The Detailed Speech Outline (24-Minute Structure)
Create a detailed outline specifying duration, purpose, key message, and rhetorical elements for each section (Hook, Authority, Rapport, Main Points, Vision & CTA).
Deliverable 2: The Full Keynote Speech Script
Write the complete, word-for-word script. Write for the ear, not the eye, and include stage directions like [PAUSE].
Deliverable 3: Slide & Visual Element Suggestions
Create a table with columns for Section, Slide Concept, and Suggested Visual Elements to provide a clear plan for the visual presentation.
Deliverable 4: Abstract
Write a concise, engaging abstract that explains what the talk is about, why it matters to the audience, and what practical value or insights they will take away.
Deliverable 5: Blog Post
Write a short, compelling post that summarizes the main topic, highlights the unique perspective or value it offers, and gives readers a clear reason to click and read further. Less than 9,000 characters.