Boostonomics: A Tale of Trials, Tribulations, and Triumphs

A Publisher’s Perspective of Medium’s Boost Program

Dinah Davis (She/Her)
Code Like A Girl


Created with Canva

In October, I was ecstatic to learn that the publication I run, Code Like A Girl, had been accepted into the prestigious Medium Boost Program. It marked a significant milestone for me as a publisher on Medium.

Despite the excitement, navigating the program’s criteria proved challenging, leading to a modest 22% acceptance rate in my first month. Undeterred, I kept submitting stories I thought were excellent and achieved a remarkable 50% acceptance rate in January, which I sustained through February.

Yet, as the saying goes, the only constant is change. March brought with it an unexpected turn of events. I didn’t change my submission criteria or process, but the boost acceptance rate dropped below 50%.

Code Like A Girl Boost Acceptance Rate

I totally understand — you can’t boost them all, but the lack of feedback has been frustrating. It’s like being in a job where you’re doing your best but rarely hear anything from your superiors: you’re left adrift, unsure of how to improve.

Sure, I’d get notification emails telling me whether a story had been selected for a boost, but they shed little light on the decision-making process. Without knowing why specific stories were passed over, I was stumbling in the dark, trying to find a way forward.

But here’s the thing — just as constructive feedback is crucial for our professional growth, getting actionable insights on rejected stories could be a game-changer for our writing process. It could help us fine-tune our approach and improve our chances of acceptance.

Screenshot by Author

I understand that the Medium curation team has many responsibilities. As an editor who spends all day working on the platform, I know it’s hard, if not impossible, for the curation team to give detailed feedback on every story. But wouldn’t it be great to hear tips on what’s working and what’s not?

The other day, I chatted with my friend Tracy Collins and shared these frustrations. She had a fantastic idea, drawing from our past experience working at an online education company. She said,

“Why not use a rubric, like they do in school, to help figure out which stories to boost?”

How did I forget about rubrics? A rubric is such a simple yet elegant tool for driving empirical decisions! Rubrics helped me make quick judgments about the products hackers built when I judged hackathons, and they’re a standard way for managers and companies to assess people’s readiness for promotion. A rubric could help me decide more clearly which stories to nominate for boosting!

I decided to pilot the rubric with Code Like A Girl, just as we often pilot ideas at work when we want to influence our superiors. I aimed to determine whether it could be helpful for the Medium curation team to share something similar with us. The rubric could:

  1. Help the curation team make decisions consistently across its many reviewers.
  2. Help boost nominators make better decisions, submitting fewer stories with a higher success rate. The curation team wins because they review fewer submissions, and editors win because they spend less effort and feel more successful.
  3. Help writers better understand what boost nominators look for when assessing their stories.

Sounds like a win-win-win, right?

I created a rubric for April to determine which stories should be boosted for Code Like A Girl.

The Rubric

The Criteria

Based on the Medium Boost Guidelines, I created five criteria for evaluating whether a story should be boosted.

Overall Impact and Longevity

This assesses whether a story will have a lasting impact, contribute to ongoing conversations, and remain relevant.

Quality of Writing

Clear, concise, and well-structured writing is critical. A strong command of language, solid grammar, and effective storytelling capture readers’ attention and keep them engaged.

Engagement and Readability

Good content captivates readers and sustains their interest throughout the reading experience.

Originality and Uniqueness

Originality is crucial for a story's success. Unique perspectives, fresh insights, and innovative ideas can make stories more compelling to readers.

Value to Readers

Valuable content is essential for a successful story. It should offer practical insights or information that readers can apply to their lives or work.

The Results

I used this rubric in April to determine whether to boost a story. To be nominated for boosting, a story must have a boost score of 11 or higher, be excellent overall, and be exceptional in at least one category.
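Here’s a minimal sketch of that nomination rule in Python. The rubric’s exact point scale isn’t spelled out above, so the sketch assumes each criterion is rated 1 to 3 (with 3 meaning exceptional), which lines up with five criteria and a top boost score of 15; the snake_case criterion names are just shorthand for the categories listed earlier.

```python
# A sketch of the rubric's nomination rule, assuming each of the five
# criteria is rated 1-3 (1 = adequate, 2 = excellent, 3 = exceptional).
# The scale is inferred from a maximum possible boost score of 15.
CRITERIA = [
    "overall_impact_and_longevity",
    "quality_of_writing",
    "engagement_and_readability",
    "originality_and_uniqueness",
    "value_to_readers",
]

def should_nominate(ratings: dict) -> bool:
    """Nominate when the total boost score is 11+ and at least one
    criterion is rated exceptional (3)."""
    boost_score = sum(ratings[c] for c in CRITERIA)
    has_exceptional = any(ratings[c] == 3 for c in CRITERIA)
    return boost_score >= 11 and has_exceptional

# Example: excellent across the board, exceptional in writing quality.
ratings = {
    "overall_impact_and_longevity": 2,
    "quality_of_writing": 3,
    "engagement_and_readability": 2,
    "originality_and_uniqueness": 2,
    "value_to_readers": 2,
}
print(should_nominate(ratings))  # True -- boost score of 11, one exceptional
```

Writing the rule down this explicitly has a side benefit: every nomination leaves a score behind, which is what makes the month-over-month comparisons later in this post possible.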

The result?

Percentage of stories that were nominated and accepted for boosting per month.

Success! 63% of the stories nominated in April were accepted for boosting. That is an improvement of 21 percentage points over March and 13 points over January and February! Something must be working here. As a bonus, because my boost acceptance rate was over 60%, my quota of 20 nominations per month has been lifted for May, so I can submit more than 20 stories for boosting.

Planning Ahead

Looking ahead to May, I plan to maintain the Boosting Rubric in its current form and assess incoming articles using its criteria. My objective remains steadfast: enhancing the quality of the submissions published with Code Like A Girl. I am dedicated to raising the bar for writing standards across all aspects of our rubric. While I won’t be completing a rubric for every author, I will provide editing feedback aligned with the rubric as a guiding framework.

Ultimately, our primary focus remains shifting the narratives surrounding women and non-binary individuals in the tech sphere. We achieve this by handpicking stories that deeply resonate with our audience, igniting thought-provoking discussions and fostering meaningful interactions within the Medium community.

If you’re curious about the data insights I’ve gathered this month, stick around for more. Otherwise, thanks for dropping by. If you enjoyed the story, a few claps would be appreciated.

Stats for Nerds

There’s something about diving into data analysis that feels like second nature — a bit like delving into the intricacies of software development. Uncovering insights, patterns, and trends is like solving coding problems. Data tells a story waiting to be decoded. I discovered some intriguing insights that could guide our improvement efforts.

Boost acceptance rate of stories by boost score.

Surprisingly, a higher boost score didn’t consistently translate into a higher acceptance rate. The expected pattern held between boost scores of 11, which had a 50% acceptance rate, and 12, with a 67% acceptance rate.

However, the trend broke down at a boost score of 13. Admittedly, I rated only two stories at that level, so the sample size may simply have been too small. I rated only one story at 15, and it did receive a boost, which suggests promising potential in our selection process.

I also looked at the average value of each rubric criterion: did the boost curators favour one category over another? Accepted stories earned higher ratings in Engagement and Readability, Originality and Uniqueness, and Overall Impact and Longevity, but lower ratings in Value to Readers than rejected stories did. This suggests that perceived value to readers may not be the primary determinant of acceptance. The gap between accepted and rejected stories in that category is small, though, so it warrants further exploration and experimentation.
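If you track your nominations in a spreadsheet, both comparisons are a few lines of pandas. The column names and sample rows below are invented for illustration (chosen to roughly echo the rates reported above); they are not my actual tracking data.

```python
import pandas as pd

# Illustrative tracking data: one row per nominated story.
# Substitute your own tracking sheet here.
df = pd.DataFrame({
    "boost_score":      [11, 11, 12, 12, 12, 13, 13, 15],
    "value_to_readers": [3, 2, 2, 2, 3, 3, 2, 3],
    "accepted":         [True, False, True, True, False, False, True, True],
})

# Acceptance rate at each boost score (the chart above).
print(df.groupby("boost_score")["accepted"].mean())

# Average rating on a criterion for accepted vs. rejected stories.
print(df.groupby("accepted")["value_to_readers"].mean())
```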

Another positive impact of using the rubric is that it pushed me to work harder with our writers on improving their stories. Our read rate was exceptionally high this month, giving us our highest average read time per story since I began tracking it: readers spent a combined 2,393 minutes reading each story on average, a significant increase from October, when the figure was only 1,010 minutes per story.

Ariel Meadow Stallings and the Medium curation team: I would love to collaborate with you on improving the Boost program. Do you already use an evaluation rubric, and if so, could you share it with us?
