What are Review Cycles?

Review cycles (formerly called 360-degree feedback) gather confidential, multi-rater insights about a teammate's impact. Instead of relying only on a manager's perspective, a review cycle includes voices from every direction: managers, peers, direct reports, cross-functional partners, and even a quick self-reflection.

The Multi-Rater Perspective

  • Manager Feedback: How the person performs on job responsibilities
  • Peer Feedback: How well they collaborate and support colleagues
  • Direct Report Feedback: Leadership style and support (if applicable)
  • Self Assessment: How they see their own strengths and development areas

Why Use Review Cycles?

Comprehensive Perspective

Different people see different things. A manager sees task completion, peers see collaboration, direct reports see leadership. Together, they paint a complete picture.

Identify Blind Spots

We often don't see ourselves as others do. Review cycle feedback reveals gaps between self-perception and how others perceive us—this is where growth happens.

Better Development Plans

With feedback from multiple sources, you can create more targeted, effective development plans focused on what actually matters to the team.

Stronger Culture

Running review cycles demonstrates that feedback and learning are valued by the organization. It creates accountability and psychological safety.

Reduced Bias

Multiple raters reduce the impact of individual bias compared to single-source feedback.

Getting Started with Review Cycles

Step 1: Plan Your Campaign

  1. Decide who will participate:
    • Entire organization?
    • Specific departments or levels?
    • Just the leadership team?
  2. Determine the review period (annual, semi-annual, etc.)
  3. Set dates with 2-3 weeks' notice to participants
  4. Plan how results will be used (development, decisions, etc.)
  5. Ensure confidentiality and anonymity will be maintained

Step 2: Set Up in Sizemotion

  1. Go to Performance > Review Cycles
  2. Click "New Campaign"
  3. Name your campaign (e.g., "2025 Leadership Review Cycle")
  4. Choose or customize the feedback questions
  5. Set dates:
    • Invitation date: When reviewers get invited
    • Due date: When feedback must be submitted
  6. Select participants (reviewees)

Step 3: Select Reviewers

For each person being reviewed, identify their reviewers:

  • Manager: Direct supervisor (required)
  • Peers: 4-6 colleagues at the same level (recommended)
  • Direct Reports: Team members they manage (if applicable)
  • Self: Self-assessment (always included)

Pro Tip: Aim for 6-8 raters total for good feedback quality. Too few and you miss perspectives; too many and feedback becomes generic.

Step 4: Communicate & Launch

  1. Send communication explaining:
    • What review cycles are and why you're running them
    • How feedback will be used
    • Confidentiality and anonymity commitment
    • Timeline and key dates
  2. Review all settings in Sizemotion
  3. Click "Launch Campaign"
  4. Reviewers receive invitations

Step 5: Monitor Progress

  • Track submission rates in the dashboard
  • Send reminders to non-respondents as due date approaches
  • Be prepared to extend deadlines if needed
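
Behind the scenes, progress tracking is simple arithmetic over submission status. Here is a minimal sketch with made-up data (an illustration of the idea, not Sizemotion's actual API):

```python
from datetime import date

# Hypothetical tracking data: reviewer -> has submitted?
submissions = {"ana": True, "ben": False, "chen": True, "dee": False}
due_date = date(2025, 11, 14)
today = date(2025, 11, 11)  # fixed date so the example is deterministic

rate = sum(submissions.values()) / len(submissions)
pending = [name for name, done in submissions.items() if not done]
days_left = (due_date - today).days

print(f"Submission rate: {rate:.0%} ({len(pending)} pending, {days_left} days left)")
if pending and days_left <= 3:
    print("Send reminders to:", ", ".join(pending))
```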

AI-Powered Feedback Analysis

Sizemotion uses artificial intelligence to analyze multi-rater feedback and generate comprehensive summaries, helping managers and HR professionals quickly understand patterns, themes, and actionable insights without spending hours reading through individual responses.

What is AI Feedback Analysis?

The AI Feedback Analysis feature automatically processes all review cycle responses—including quantitative ratings, open-ended text feedback, self-assessments, and peer comments—to identify key themes, strengths, development areas, blind spots, and sentiment patterns. This turns hours of manual analysis into under a minute of automated processing.

🤖 Intelligent Processing: Our AI analyzes feedback from all perspectives (manager, peers, direct reports, self), identifies patterns across quantitative and qualitative data, detects sentiment and emotional tone, highlights gaps between self-perception and others' views, and surfaces actionable development priorities.

Two Types of AI Analysis Available

1. Feedback Summary

Get an AI-generated comprehensive summary of all feedback received for a specific participant.

How to Generate:

  1. Navigate to Performance > Review Cycles
  2. Select a completed campaign (all or most feedback submitted)
  3. Click on a specific participant to view their results
  4. Click the "Feedback Summary" button
  5. The AI analyzes all feedback received for that person (typically takes 15-30 seconds)
  6. Review the comprehensive summary with insights and recommendations

What's Included:

  • Executive Summary: 2-3 sentence overview of overall feedback received
  • Key Strengths: Top 3-5 areas where feedback was consistently positive
  • Development Opportunities: Areas for growth identified across multiple raters
  • Perspective Comparison: How self-assessment aligns with others' views
  • Blind Spot Analysis: Significant gaps between self and others' perceptions
  • Sentiment Analysis: Overall tone and emotional context of feedback
  • Coaching Priorities: What to focus on in development conversations
  • Actionable Recommendations: Specific development actions with priorities
  • Conversation Starters: How to discuss sensitive feedback topics

Best For: Managers preparing for feedback conversations, HR teams conducting development planning, and participants seeking self-awareness and a full view of the feedback they received

2. Theme Analysis

Identify common themes and patterns across all feedback in the campaign.

How to Generate:

  1. Navigate to Performance > Review Cycles
  2. Select a completed campaign
  3. From the campaign overview, click "Theme Analysis"
  4. The AI analyzes feedback across all participants to identify patterns (typically takes 30-60 seconds)
  5. Review organization-wide themes and insights

What's Included:

  • Common Themes: Recurring patterns appearing across multiple participants
  • Organizational Strengths: Company-wide areas of excellence
  • Development Needs: Competency gaps needing organizational attention
  • Cultural Insights: What feedback patterns reveal about team dynamics and culture
  • Leadership Patterns: Common themes in how leaders are perceived
  • Skill Gaps: Training or development program opportunities
  • Recognition Opportunities: High performers to celebrate publicly
  • Risk Indicators: Patterns that may signal cultural or performance issues
  • Recommended Initiatives: Suggested organization-wide actions based on themes

Best For: HR teams planning development programs, leadership assessing organizational health, identifying company-wide training needs, and spotting cultural trends

💡 Pro Tip: Generate Theme Analysis first to understand organization-wide patterns, then generate individual Feedback Summaries for participants. This helps you contextualize individual feedback within broader company trends. Both analyses are cached for 24 hours and can be regenerated if new feedback arrives.
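
To illustrate the caching behavior the tip describes, here is a minimal time-to-live cache sketch (an illustration of the pattern, not Sizemotion's actual implementation; `get_analysis` and `invalidate` are hypothetical names):

```python
import time

CACHE_TTL_SECONDS = 24 * 60 * 60  # analyses are cached for 24 hours, per the tip above
_cache: dict[str, tuple[float, str]] = {}

def get_analysis(key: str, generate) -> str:
    """Return a cached analysis if still fresh; otherwise regenerate and cache it."""
    now = time.time()
    if key in _cache and now - _cache[key][0] < CACHE_TTL_SECONDS:
        return _cache[key][1]
    result = generate()          # e.g., a call to the AI analysis service
    _cache[key] = (now, result)
    return result

def invalidate(key: str) -> None:
    """Evict a stale entry, e.g. when new feedback arrives before the TTL expires."""
    _cache.pop(key, None)
```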

What AI Analysis Includes

Quantitative Synthesis

  • Average Scores: Mean ratings across all questions and rater groups
  • Score Distribution: Range and variance in ratings
  • Rater Agreement: How consistently raters scored each area
  • Comparison to Norms: How scores compare to team/company averages
  • Highest/Lowest Rated: Clear standout strengths and weaknesses
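
As a rough illustration of what this synthesis involves, here is a sketch with invented ratings and assumed formulas (the product's exact statistics may differ):

```python
import statistics

# Hypothetical 1-5 ratings for one question, grouped by rater type
ratings = {
    "manager": [4.0],
    "peers": [3.5, 4.0, 3.0, 4.5],
    "direct_reports": [3.0, 2.5, 3.5],
}

all_scores = [s for group in ratings.values() for s in group]
mean = statistics.mean(all_scores)
spread = statistics.stdev(all_scores)  # standard deviation across raters
group_means = {g: round(statistics.mean(s), 2) for g, s in ratings.items()}

# Simple agreement proxy: the tighter the spread, the higher the agreement
agreement = "high" if spread < 0.5 else "moderate" if spread < 1.0 else "low"
company_norm = 3.8  # assumed benchmark for the norm comparison
print(f"mean={mean:.2f} (norm {company_norm}), spread={spread:.2f}, agreement={agreement}")
print(group_means)
```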

Qualitative Theme Extraction

  • Recurring Keywords: Words and phrases mentioned frequently
  • Thematic Categories: Grouped feedback by topic (communication, leadership, etc.)
  • Example Quotes: Representative verbatim feedback for each theme
  • Context Understanding: Why themes emerged and what they mean
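
A toy version of keyword-based theme surfacing; real theme extraction uses language models, but the counting intuition is similar:

```python
from collections import Counter
import re

comments = [
    "Great communicator but tends to interrupt in meetings",
    "Interrupts colleagues when meetings move too fast",
    "Strong technical skills and a clear communicator",
]
STOPWORDS = {"a", "and", "but", "in", "to", "too", "the", "when"}

keywords = Counter(
    word
    for comment in comments
    for word in re.findall(r"[a-z]+", comment.lower())
    if word not in STOPWORDS
)
print(keywords.most_common(3))  # recurring words hint at themes worth grouping
```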

Perspective Comparison

  • Self vs Others: Gaps between self-assessment and peer/manager ratings
  • Upward vs Peer: How direct reports' views compare with peers'
  • Manager vs Team: Manager's view compared to collective team feedback
  • Blind Spot Identification: Areas they don't see but others do
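
The gap arithmetic itself is straightforward. A sketch with invented scores and an assumed flagging threshold:

```python
# Hypothetical per-competency averages by perspective (1-5 scale)
scores = {
    "creates_space_for_ideas": {"self": 4.5, "peers": 3.1, "direct_reports": 2.8},
    "technical_expertise":     {"self": 4.0, "peers": 4.6, "direct_reports": 4.4},
}
GAP_THRESHOLD = 1.0  # assumed cutoff for flagging a blind spot

for competency, by_group in scores.items():
    others = [v for group, v in by_group.items() if group != "self"]
    gap = by_group["self"] - sum(others) / len(others)
    if gap >= GAP_THRESHOLD:
        print(f"{competency}: possible blind spot (self-rating {gap:+.2f} vs others)")
    elif gap <= -GAP_THRESHOLD:
        print(f"{competency}: possible hidden strength (self-rating {gap:+.2f} vs others)")
```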

Sentiment & Tone Analysis

  • Overall Sentiment: Positive, neutral, or negative tone
  • Emotional Indicators: Enthusiasm, concern, frustration, appreciation
  • Language Intensity: Strong vs moderate feedback
  • Constructive vs Critical: Tone of development suggestions
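
For intuition only, here is a toy word-list scorer; production sentiment analysis relies on NLP models rather than keyword lists:

```python
import re

POSITIVE = {"brilliant", "supportive", "appreciate", "excellent", "love"}
NEGATIVE = {"frustrating", "concern", "interrupts", "dominates", "difficult"}

def sentiment_score(comment: str) -> int:
    words = set(re.findall(r"[a-z]+", comment.lower()))
    return len(words & POSITIVE) - len(words & NEGATIVE)

comments = [
    "Brilliant problem solver, always supportive",
    "Meetings can be frustrating when he interrupts",
]
scores = [sentiment_score(c) for c in comments]
total = sum(scores)
overall = "positive" if total > 0 else "negative" if total < 0 else "mixed"
print(scores, "->", overall)  # [2, -2] -> mixed
```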

Actionable Recommendations

  • Priority Ranking: Which areas to focus on first and why
  • Specific Actions: Concrete steps to improve in each area
  • Resources: Training, coaching, or learning opportunities
  • Quick Wins: Easy improvements that would have high impact
  • Long-term Development: Sustained growth areas requiring time

Example AI Summary Output

AI Summary: Alex Martinez - Q4 2025 Leadership Review

Executive Summary:

Alex receives strong positive feedback for technical expertise and individual contributions, with consistent praise for problem-solving abilities and code quality. However, there's a notable pattern across peer and direct report feedback highlighting opportunities to improve communication style, delegation practices, and creating space for others' ideas in meetings.

Key Strengths (Consistent across 8/10 raters):

  • Technical Excellence: "Go-to person for complex architectural decisions" (Manager + 5 peers)
  • Problem Solving: "Breaks down complicated problems brilliantly" (3 direct reports + 4 peers)
  • Work Quality: "Delivers exceptional, well-documented code" (Manager + 6 peers)
  • Availability: "Always willing to help when I'm stuck" (4 direct reports)

Development Opportunities (Priority Order):

  1. Communication Style (Mentioned by 7/10 raters)
    Theme: Tendency to interrupt, dominate technical discussions, jump to solutions quickly
    Example Quotes:
    • "Sometimes cuts people off mid-sentence when he already knows the answer" (Peer)
    • "Moves very fast in meetings - hard to keep up or contribute" (Direct Report)
    • "Could benefit from slowing down to hear others' perspectives" (Manager)

    Impact: Team members feel less heard; junior engineers hesitant to speak up
  2. Delegation & Trust (Mentioned by 4/5 direct reports)
    Theme: Difficulty letting go of technical decisions; tendency to take over when progress is slow
    Example Quotes:
    • "Sometimes takes tasks back instead of coaching me through" (Direct Report)
    • "Would love more autonomy on architecture choices" (Direct Report)

    Impact: Limits team's growth; creates bottleneck; may contribute to burnout
  3. Strategic Thinking (Mentioned by manager + 2 peers)
    Theme: Strong on tactical execution but could zoom out more
    Example Quote: "Would benefit from more focus on longer-term strategy vs firefighting" (Manager)

Blind Spot Analysis:

Communication Impact: Alex rates himself 4.5/5 on "Creates space for others' ideas" but receives 2.8/5 from direct reports and 3.1/5 from peers. This gap of 1.4-1.7 points indicates a significant blind spot: he doesn't realize how his communication style affects team dynamics.

Sentiment Analysis:

Overall sentiment is positive with constructive concern. Raters clearly respect Alex's technical ability and want him to succeed. Feedback tone is supportive ("would love to see", "opportunity to grow") rather than critical. No hostile emotion was detected; the team is invested in his development.

Recommended Actions (Priority Order):

  1. Communication Skills Workshop (Quick Win)
    Focus: Active listening, asking vs telling, pausing before responding
    Timeline: Within 30 days
  2. Weekly 1:1 Coaching on Delegation (Sustained Growth)
    Focus: Building trust, coaching through problems, resisting urge to take over
    Timeline: Next 3 months with manager
  3. 360 Practice: "Meeting Observation" (Immediate)
    Have a trusted peer observe the next team meeting and provide specific feedback on when and how he interrupts
    Timeline: Next team meeting
  4. Strategic Thinking Program (Long-term)
    Senior leader mentorship on balancing tactical vs strategic work
    Timeline: Q1 2026

Conversation Starters for Manager:

  • "Your technical expertise shines through in all the feedback. Several people mentioned you're their go-to for complex problems. How does that feel?"
  • "There's an interesting gap I'd like to explore: you rated yourself highly on creating space for others, but your team sees this differently. What do you think might explain that difference?"
  • "Your direct reports really want to learn from you. Let's talk about how we can channel your expertise into coaching rather than doing..."

Best Practices for Using AI Analysis

Preparation & Generation

  1. Wait for Complete Data: Generate summaries after 80%+ of feedback is submitted
  2. Review Raw Data First: Glance at individual responses to build context
  3. Generate for All Participants: Consistent analysis across the team
  4. Regenerate if Needed: If late feedback arrives, refresh the summary

Interpreting Results

  • Context Matters: AI provides patterns, but you know the full story
  • Verify Themes: Cross-reference AI insights with actual feedback quotes
  • Consider Sample Size: More raters = more reliable pattern detection
  • Note Outliers: Strong feedback from one rater may signal something important
  • Cultural Nuance: AI may miss cultural context - apply your judgment

Using with Participants

  • Share Feedback Summary: The individual analysis is designed to be shared with the employee
  • Discuss Together: Walk through AI summary in person, don't just email
  • Focus on Patterns: Emphasize themes across multiple raters
  • Validate Their Experience: Ask if AI insights resonate with their self-perception
  • Co-create Action Plan: Use AI recommendations as starting point
  • Reference Theme Analysis: Contextualize individual feedback within organization-wide patterns

For Development Planning

  • Prioritize Blind Spots: Greatest growth comes from addressing what they don't see
  • Balance Quick Wins + Long-term: Mix immediate actions with sustained development
  • Leverage Strengths: How can strengths compensate for weaknesses?
  • Track Progress: Reference AI summary in future 1:1s to measure change
  • Compare Next Cycle: AI can detect improvement patterns over time

AI Analysis for Different Roles

For HR/People Ops

  • Theme Analysis: Generate campaign-wide insights across all participants
  • Risk Identification: Quickly spot concerning patterns needing escalation
  • Development Programs: Identify common skill gaps for training initiatives
  • Culture Insights: What does feedback reveal about team health?
  • Process Improvement: Which questions generate the most valuable feedback?
  • Individual Summaries: Generate Feedback Summaries for participants requiring attention

For Managers

  • Feedback Summary Generation: Create comprehensive summaries for each direct report
  • Prep for Feedback Conversations: Clear talking points and priorities from AI insights
  • Coaching Focus: Specific behaviors and impacts to discuss
  • Development Planning: Actionable recommendations to explore
  • Team Patterns: Compare individual summaries to spot team-wide themes
  • Progress Tracking: Reference summaries in ongoing 1:1s

For Participants

  • Self-Awareness: Understand how others perceive them
  • Blind Spot Discovery: Learn about gaps in self-perception
  • Clear Priorities: Know what to focus on first
  • Validation: Confirm strengths with data
  • Actionable Steps: Concrete suggestions to improve

Privacy & Confidentiality

  • Anonymity Preserved: AI analysis never attributes feedback to specific raters
  • Aggregated Insights: Patterns and themes rather than individual comments; very small samples can make even aggregated themes identifiable, so they are handled with extra care
  • Access Control: Manager reports only visible to managers and HR, not participants
  • Secure Processing: Feedback data encrypted during AI analysis
  • Data Retention: Summaries stored securely, separate from raw feedback
  • No External Training: Your feedback data never used to train external AI models

When AI Analysis Provides Most Value

High-Impact Scenarios

  • Large Review Cycles: 20+ participants where manual analysis is overwhelming
  • Senior Leadership: Complex feedback requiring nuanced interpretation
  • Performance Concerns: Objective analysis of critical feedback
  • High Potentials: Detailed development planning for future leaders
  • First-Time Managers: Clear coaching priorities for new leaders
  • Post-Promotion: How someone is adapting to a new role

Limited Value Scenarios

  • Very Few Raters: Fewer than 3-4 responses may not show patterns
  • Incomplete Feedback: Mostly ratings without qualitative comments
  • Short-Form Surveys: Limited data for AI to analyze
  • Single-Question Campaigns: Not enough dimensionality

Combining AI with Human Judgment

✅ Best Approach: Use AI analysis as a powerful starting point, but always combine it with your knowledge of context, relationships, recent events, and organizational dynamics. AI excels at pattern detection; humans excel at wisdom and empathy. Together, they create the most effective feedback experience.

| Task | AI Strength | Human Strength |
| --- | --- | --- |
| Pattern Detection | ✅ Excellent - spots themes instantly | ⚠️ Good but time-consuming |
| Context Understanding | ⚠️ Limited to text provided | ✅ Knows full backstory |
| Sentiment Analysis | ✅ Consistent and objective | ✅ Picks up subtle nuance |
| Actionable Recommendations | ✅ Evidence-based suggestions | ✅ Tailored to individual |
| Coaching Conversation | ⚠️ Provides talking points | ✅ Builds relationship and trust |
| Bias Detection | ✅ No personal bias | ⚠️ May have unconscious bias |

Troubleshooting AI Analysis

"The AI summary seems generic"

This usually indicates:

  • Limited qualitative feedback (mostly ratings, few comments)
  • Very similar feedback across all raters
  • Short-form responses without detail
  • Solution: Encourage raters to provide specific examples and detailed comments in future cycles

"AI missed an important theme I see in the data"

  • AI prioritizes patterns mentioned by multiple raters - single mentions may be filtered out
  • Context or subtext may not be obvious from text alone
  • Solution: Add your observations to the summary and share combined insights

"Blind spot analysis shows large gaps but I don't see it"

  • Verify sample size - gaps based on fewer than 3 raters may not be reliable
  • Consider if self-assessment questions match what others were rating
  • Look at raw feedback to understand the gap context
  • Solution: Discuss with employee to explore their perspective

"The recommendations don't fit this person"

  • AI provides general best practices - you customize for the individual
  • Consider organizational resources and context
  • Solution: Use recommendations as inspiration, adapt to reality

💡 Remember: AI analysis is a tool to enhance your judgment, not replace it. You bring context, relationships, organizational knowledge, and empathy that AI cannot replicate. Use AI to save time on pattern detection and generate ideas, then apply your human wisdom to create meaningful development conversations.

Best Practices for Effective Review Cycles

📋 Design

  • Keep it short: 10-15 questions max. Longer surveys have lower completion rates.
  • Use behavior-focused questions: "Communicates clearly" not "Is a good person"
  • Include open-ended questions: "What's one thing they should develop?" allows rich feedback
  • Focus on what matters: Tailor questions to role and competencies
  • Test the survey: Have someone try it before launching

🗣️ Communication

  • Explain purpose: People respond better when they understand why
  • Promise confidentiality: Reassure reviewers their feedback is anonymous
  • Set expectations: How will results be used? Will they affect compensation?
  • Make it easy: mobile-friendly surveys, draft saving, and clear instructions
  • Follow up: Thank participants and share how insights are being used

✍️ Writing Effective Feedback

  • Be specific: Reference actual situations and behaviors
  • Balance: Include both what they do well and development areas
  • Be constructive: Feedback should help them improve, not tear them down
  • Use examples: "In the Q3 launch meeting, they jumped to solutions without hearing concerns"
  • Focus on impact: How does this behavior affect the team?

🔐 Confidentiality

  • Keep feedback anonymous: Don't reveal who said what
  • Report aggregates: Show themes and patterns, not individual responses
  • Small group rule: With fewer than 3 reviewers in a rater group, individual responses may be identifiable, so share only overall themes or fold the group into aggregate results (see the sketch after this list)
  • Store securely: Restrict access to HR and the participant
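
One way to enforce the small group rule programmatically, sketched with an assumed minimum of 3 raters per group:

```python
MIN_RATERS = 3  # assumed threshold for the small group rule

def reportable_results(responses_by_group: dict[str, list[float]]) -> dict[str, float]:
    """Break results out by rater group only when the group is large enough."""
    report, pooled = {}, []
    for group, responses in responses_by_group.items():
        if len(responses) >= MIN_RATERS:
            report[group] = sum(responses) / len(responses)
        else:
            pooled.extend(responses)  # too few raters: fold into a pooled average
    if pooled:
        report["other (pooled)"] = sum(pooled) / len(pooled)
    return report

print(reportable_results({"peers": [4.0, 3.5, 3.0, 4.5], "direct_reports": [2.5, 3.0]}))
# {'peers': 3.75, 'other (pooled)': 2.75}
```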

Analyzing & Using Results

Understanding the Data

  1. Read all feedback: Get full picture before jumping to conclusions
  2. Look for patterns: What themes emerge across multiple raters?
  3. Compare perspectives: Where does the person's self-view differ from others? This is important!
  4. Note surprises: Unexpected feedback often points to blind spots
  5. Identify strengths: Where is there consistent positive feedback?

Development Conversations

  1. Schedule in person: Don't just email results
  2. Ask their thoughts first: "How do you think you did?" before showing feedback
  3. Focus on development: "Here's how to get even better" not "You failed at X"
  4. Identify 1-2 focus areas: Too many priorities = no progress
  5. Create action plan: What will they do differently? How will they practice?

Red Flags to Address

Some feedback patterns suggest immediate action:

  • Consistent criticism: Multiple raters all mentioning the same issue
  • Safety concerns: Harassment, discrimination, or abuse allegations
  • Large self-perception gaps: Person thinks they're great but others disagree
  • Potential derailers: Trust, integrity, or behavior issues

Have HR review these patterns and consider if coaching, intervention, or further discussion is needed.

Using Results for Development

  • Coaching: Work with a coach on specific areas
  • Training: Communication, leadership, technical skills
  • Mentoring: Learn from someone strong in this area
  • Stretch assignments: Practice new skills in safe environment
  • Job changes: Sometimes role alignment matters more than development

Frequently Asked Questions

How do I ensure honest feedback?

Confidentiality and anonymity are key. Also choose raters who work closely with the person so they have real observations to share.

What if someone gets very negative feedback?

Treat it seriously but with compassion. Meet privately, listen, and create a development plan. Some people respond very well to this kind of wake-up call.

How often should we run review cycles?

Annual cycles are common. Some organizations run them twice a year for high-potential or high-risk positions. Run them more often than that, though, and nothing has time to change between cycles.

Can feedback be tied to compensation?

Yes, but be careful. It can reduce honesty if people worry feedback affects pay. Many companies use it for development only, separate from compensation decisions.

What if someone refuses to do it?

It's part of professional development. Explain its value. If they still refuse, that's worth a discussion about their openness to feedback.
