
AI Rules Lunch and Learn Format

Monthly lunch-and-learn sessions keep AI standards top of mind, share best practices, and provide a forum for feedback. This guide covers the format, content rotation, and how to keep sessions engaging month after month.

5 min read·July 5, 2025

30 minutes, monthly, no mandatory attendance. The lunch-and-learn keeps AI standards top of mind without creating meeting fatigue.

Rule spotlights, developer stories, tips and tricks, rotating presenters, and long-term content sustainability

30 Minutes, Monthly, No Slides Required

The AI rules lunch-and-learn: a monthly 30-minute session that keeps AI standards in the organizational conversation. The format is intentionally lightweight: no mandatory attendance (pull, not push), no lengthy presentations (30 minutes maximum), and no formal structure (rotating formats keep it fresh). The goal: developers leave each session with one new technique, one new insight, or one new rule they can apply immediately.

Why monthly: weekly is too frequent (developers experience meeting fatigue), quarterly is too infrequent (momentum is lost between sessions), monthly is the sweet spot (regular enough to maintain awareness, infrequent enough that each session feels valuable). The session runs during lunch — developers eat while watching/listening. No extra time commitment required.

The audience: all developers are invited, but attendance is voluntary. Typical attendance: 30-50% of the engineering org. This is normal and healthy — mandatory attendance creates resentment. The developers who attend are the most engaged, provide the best feedback, and become the informal champions who share insights with their teams.

Content Rotation Calendar

Month 1 — Rule spotlight: deep dive into 1-2 rules that were recently added or updated. Why the rule exists, what it prevents, and a live demo showing the AI's behavior with and without the rule. This format: educates developers on specific rules and demonstrates their value. AI rule: 'The rule spotlight is the most educational format. Developers who understand why a rule exists: follow it willingly instead of grudgingly.'
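For the spotlight itself, putting the rule text on screen works better than describing it. A hypothetical sketch of what a spotlighted rule might look like (the file name and wording are illustrative, not from any real rules repo):

```markdown
<!-- rules/error-handling.md — hypothetical spotlight subject -->
## Rule: consistent error handling in service code

- Wrap external calls (HTTP, DB, queues) in try/catch.
- Rethrow as a typed AppError with an error code; never swallow exceptions.
- Log at the service boundary only, with request context attached.

Rationale: without this rule, AI-generated handlers mixed bare throws,
silent catch blocks, and ad-hoc console logging, making incidents
hard to trace.
```

The rationale block doubles as the presenter's talking points: it answers 'why does this rule exist' before anyone has to ask.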

Month 2 — Developer story: a developer from any team presents their AI rules experience. Format: 5 minutes of story (problem, change, result), 5 minutes of live demo, 20 minutes of Q&A. This format: provides social proof (peer experience is more convincing than platform team presentations) and surfaces real-world insights. AI rule: 'Developer stories are the most engaging format. The presenter shares genuine experience, not corporate messaging. Attendance is typically highest for story sessions.'

Month 3 — Tips and tricks: rapid-fire format. 5 speakers, 5 minutes each, one AI rules tip apiece. Tips can be: a productivity shortcut, a rule authoring technique, a debugging approach for AI output, or a creative use of rules. This format: high energy, lots of variety, something for everyone. AI rule: 'Tips sessions are the most shareable. Developers leave with 2-3 immediately applicable techniques. The tips spread through teams via Slack after the session.'

💡 Developer Stories Get the Highest Attendance

Platform team presenting metrics: 30% attendance. A developer from the Payments team sharing how AI rules caught a currency precision bug before it hit production: 50% attendance. Developers want to hear from peers, not from governance teams. The story format gets the highest attendance, the most engagement, and the most post-session Slack discussion. Schedule developer stories every 3rd session at minimum.

Keeping Sessions Engaging

Live coding > slides: developers engage with live coding more than slide presentations. Show the AI generating code, encountering a rule, and producing the correct output. Show what happens when a rule is removed. The visual contrast: more memorable than any slide. AI rule: 'Minimize slides (5 max for context). Maximize live coding (at least 15 of the 30 minutes). If the presenter cannot demo live: pre-record a coding session and narrate.'

Interactive elements: polls (use Slack polls during the session — 'Which rule saves you the most time?'), live Q&A (dedicate the last 10 minutes to questions), and challenges (pose a coding challenge at the end that developers try with AI rules and share results in Slack). AI rule: 'Interactivity prevents passive consumption. A session where developers just watch: forgotten by end of day. A session where developers vote, ask, and try: remembered and applied.'
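If the team wants polls without extra tooling, an emoji-reaction poll posted from a small script works. A minimal sketch using Slack's Web API via the `@slack/web-api` client (the channel name and poll options are placeholders):

```typescript
import { WebClient } from "@slack/web-api";

const slack = new WebClient(process.env.SLACK_BOT_TOKEN);

// Post the question, then seed one reaction per option so
// attendees vote by clicking the matching emoji.
async function postPoll(channel: string) {
  const result = await slack.chat.postMessage({
    channel,
    text: [
      "*Which rule saves you the most time?*",
      ":one: Error handling consistency",
      ":two: Test scaffolding",
      ":three: API naming conventions",
    ].join("\n"),
  });

  for (const emoji of ["one", "two", "three"]) {
    await slack.reactions.add({
      channel: result.channel!,
      timestamp: result.ts!,
      name: emoji,
    });
  }
}

postPoll("#ai-rules-lunch-and-learn").catch(console.error);
```

Tallying the reactions afterward gives the result; subtract the one seed reaction per option from each count.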

Rotating presenters: the platform team presents some sessions, but most should be presented by developers from different teams. Benefits: diverse perspectives, distributed ownership (it is not just the platform team's event), and reduced preparation burden on any single person. AI rule: 'Recruit presenters 4 weeks ahead. Offer support: help with prep, provide a template, do a dry run. Most developers are willing to present if they feel supported and the time commitment is small (30 minutes of prep for a 5-minute segment).'

ℹ️ 15 Minutes of Live Coding > 30 Minutes of Slides

A slide that says 'AI rules improve error handling consistency': informational. A live demo where the presenter removes the error handling rule, shows the AI generating inconsistent patterns, then adds the rule back and shows consistent output: convincing. Live coding creates the before/after contrast that slides cannot. Even if the demo has a minor glitch: it is more engaging than perfect slides. Developers trust live demos because they cannot be faked.
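The before/after contrast described above might look something like this (hypothetical AI output, shown only to illustrate the demo, not taken from any specific tool):

```typescript
// Without the rule: the AI mixes patterns across handlers.
async function getUserV1(id: string) {
  const res = await fetch(`/api/users/${id}`);
  return res.json(); // no status check; errors surface as parse failures
}

async function getOrderV1(id: string) {
  try {
    const res = await fetch(`/api/orders/${id}`);
    return await res.json();
  } catch {
    return null; // failure silently swallowed
  }
}

// With the rule: every handler follows the same shape.
class AppError extends Error {
  constructor(public code: string, message: string) {
    super(message);
  }
}

async function getUserV2(id: string) {
  const res = await fetch(`/api/users/${id}`);
  if (!res.ok) throw new AppError("USER_FETCH_FAILED", `status ${res.status}`);
  return res.json();
}
```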

Sustaining the Series Long-Term

Content pipeline: maintain a backlog of session topics sourced from: quarterly rule reviews (new rules and changes to spotlight), developer feedback (topics developers want to learn about), AI tool updates (new features in Claude Code, Cursor, Copilot), and industry trends (new techniques for AI-assisted coding). AI rule: 'A backlog of 6+ topics ensures you never scramble for content the week before a session. Add topics as they come up — the backlog is always growing.'
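The backlog needs no tooling; a markdown file next to the rules repo works. A sketch of what entries might look like (the topics are illustrative):

```markdown
# Lunch-and-learn backlog

| Topic                                    | Source             | Format fit     |
| ---------------------------------------- | ------------------ | -------------- |
| New migration-safety rules (Q3 review)   | Quarterly review   | Rule spotlight |
| Payments team: currency precision story  | Developer feedback | Story          |
| Cursor rules vs CLAUDE.md: when to use   | Tool updates       | Tips           |
| Prompting patterns for legacy refactors  | Industry trends    | Tips/workshop  |
```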

Avoiding repetition: after 12 months, the content rotation has cycled 4 times. To keep sessions fresh: introduce new formats (panel discussion, debate, workshop segment), cover new topics (advanced rule writing, cross-team rule sharing, AI tool migration), and invite external speakers (AI tool vendors, developers from other companies who share their experience). AI rule: 'Novelty sustains attendance. If every session feels the same: attendance drops. If each session offers something unexpected: attendance sustains or grows.'

Measuring success: track attendance trends (growing or declining?), session ratings (anonymous 1-5 rating after each session), and knowledge application (quarterly survey — 'Did you apply something you learned at a lunch-and-learn in the past 3 months?'). If metrics decline: refresh the format, solicit new topics, or bring in new presenters. AI rule: 'The lunch-and-learn series succeeds when: attendance is stable or growing, ratings are 4.0+, and developers report applying what they learned. If any metric declines: investigate and adjust.'
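A spreadsheet covers this, but a small script keeps the trend check consistent. A minimal sketch of the decline check (the data shape and the three-session window are assumptions, matching the targets above):

```typescript
interface Session {
  month: string;          // e.g. "2025-07"
  attendanceRate: number; // attendees / invited, 0..1
  avgRating: number;      // mean of anonymous 1-5 ratings
}

// Flag the series for adjustment when attendance has dropped across
// the last three sessions or the latest rating slips below 4.0.
function needsRefresh(sessions: Session[]): string[] {
  const issues: string[] = [];
  const recent = sessions.slice(-3);

  if (
    recent.length === 3 &&
    recent[0].attendanceRate > recent[1].attendanceRate &&
    recent[1].attendanceRate > recent[2].attendanceRate
  ) {
    issues.push("attendance declining for three consecutive sessions");
  }

  const latest = sessions[sessions.length - 1];
  if (latest && latest.avgRating < 4.0) {
    issues.push(`latest rating ${latest.avgRating} below 4.0 target`);
  }
  return issues;
}

// Example: a July-September trend check.
console.log(
  needsRefresh([
    { month: "2025-07", attendanceRate: 0.45, avgRating: 4.4 },
    { month: "2025-08", attendanceRate: 0.38, avgRating: 4.1 },
    { month: "2025-09", attendanceRate: 0.31, avgRating: 3.8 },
  ]),
);
```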

⚠️ Same Format Every Month = Declining Attendance

Month 1-3: rule spotlight. Attendance: 45%. Month 4-6: still rule spotlight. Attendance: 30%. Month 7-9: still rule spotlight. Attendance: 15%. The format was good — but repetition kills engagement. Rotate between 3-4 formats (spotlight, story, tips, workshop). After 12 months: introduce new formats (panels, debates, external speakers). Novelty sustains attendance. Predictability kills it.

Lunch and Learn Summary

Summary of the AI rules lunch-and-learn format.

  • Format: 30 minutes, monthly, voluntary attendance. During lunch — no extra time commitment
  • Rotation: rule spotlight (month 1), developer story (month 2), tips and tricks (month 3). Repeat
  • Engagement: live coding > slides. Polls, Q&A, and challenges for interactivity
  • Presenters: rotating across teams. Platform team presents some, developers present most
  • Content pipeline: 6+ topics backlog from reviews, feedback, tool updates, industry trends
  • Freshness: new formats after 12 months (panels, debates, workshops, external speakers)
  • Metrics: attendance trends, session ratings (4.0+ target), knowledge application survey
  • Goal: one new technique, insight, or rule per session that developers apply immediately