Open-sourcing Sloth Finder to search Ruby Weekly, plus side-project warnings


Hey folks, what have you been up to? One of my 100-minute hack detours just became a 20-hour grind, so I want to share it with you, warning included.

Basically, I scraped Ruby Weekly to get some insights on my favorite Ruby topics and ended up hosting and open-sourcing it for no substantial reason. It was probably more just for fun and testing some boundaries, like “Can I host a Rails app for cheap on SQLite and a managed platform powered by a slow brute force search?”.

Now that I am writing this and trying to remember how I got here, I remember that I was doing a PoC for a friend who might soon be on a mission to scrape the Internet’s manufacturers’ data. 🦄

I had some fun setting up a scraping architecture, pretending that my scraper is a human on a less scraping-friendly (but not wholly unfriendly) website, downloading some gigs of top-notch quality manufacturer data and scaffolding a scraping insights app for him.

Once done and deployed, I wanted to know if there were any articles about scraping in Ruby Weekly. And about integrations. And APIs. And webhooks. But Ruby Weekly does not let you search its archives natively, so let’s scrape it while at it!

This is how one side project detour leads to another, leaving you with a bunch of rabbit-hole trips along the way. So, one "few-hours PoC" detour became a 12-hour Scraping Insights project, which spawned a 100-minute Ruby Weekly scraping hack, which became a 20-hour open-source project. Dissecting how this little app grew into a 20-hour project is a whole other story (and its own project), so I won't get into it now.

Twenty hours might sound like a weekend's work for you young bucks out there. However, with my set of responsibilities (apart from building the One-in-All Marketing API at ClickFunnels on workdays), a side project like this stretches over a whole month. Looking it up now, it actually almost did take a month:

Exported from my space.

Other than that, it’s probably time to move my ass back to my core mission of solving the API puzzle and helping Ruby companies spend less time and energy on their integrations and public APIs so they can focus on the core of their fantastic apps. I am sketching some Ruby and Rails gems for API design patterns, integration patterns, and OpenAPI schema generation ATM 👀 All stuff I’ve written "manually" at least a few times while working on different public APIs. There should be gems for that!

I guess the overall learning, or actually the reminder, for myself and you, dear friend, is that it’s essential to keep track of time, to be aware that it’s rarely "that quick thing", and that detouring from your primary mission has a long-term cost. But it’s also not a big deal if you get back on track, and there are many cool things you might learn in those 20 hours of side quests.

Back to the primary mission, though: I will also be working on a Voting API, something I want to use for the gems above, but also for my next talk about what we can learn about breaking changes from years of shipping public APIs. That’s not the actual talk title, just a one-sentence summary of what the talk is about. The current working title is "Client Days Ruined - A Metric To Help You Stop Introducing Breaking Changes". I plan to make the talk somewhat interactive, with the attendees using the API via the browser’s trusty GET request capabilities to vote on whether the thing I just shared is a breaking change. There will be a twist where the attendees witness a breaking change live on the last vote. I hope there won’t be a breaking change in the speaker, just a constructed one in the API response. That’ll be fun, and I’m pretty confident I’ll get through some CFP with this topic and setup, maybe this year or next at one of the numerous Ruby and Rails confs 😅

And it’s probably also time to create a master plan for how to read a bunch of historical and current Ruby Weekly API and integrations articles. The Sloth Finder is calling. 😅

Not amused in da sink. "Side projects shoulda be smootha".

Chapeau! 🎉

Sloth Finder

Sloth Finder helps you encounter the most excellent Ruby articles around your favorite Ruby and Rails topics from the past decade, sourced from Ruby Weekly. This tool was made because its creator, a Sloth in human form, was interested in all the most significant articles around his favorite weird Ruby niche, so he built a primitive search and looked for:

api, openapi, automation, rest, graphql, rpc, soap, webhook, scrap, event-driven, serializ

What followed was a wealth of articles full of knowledge, a better understanding of his niche, and potential leads and partners to connect with. Instead of calling it a day, he decided to share the tool with us, the people:
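That primitive search can be sketched roughly like this. A minimal sketch with assumed names: in the real app the haystack is a small ActiveRecord collection, which I stand in for here with an in-memory array, and `Article`, `title`, and `summary` are illustrative:

```ruby
# Brute-force substring search over a small in-memory collection.
# (Stand-in for an ActiveRecord scope; field names are assumptions.)
Article = Struct.new(:title, :summary)

def search(articles, query)
  terms = query.downcase.split
  articles.select do |article|
    haystack = "#{article.title} #{article.summary}".downcase
    terms.all? { |term| haystack.include?(term) }
  end
end

articles = [
  Article.new("Scraping with Ruby", "A webhook-friendly scraper"),
  Article.new("Rails on SQLite", "Cheap hosting for hobby apps")
]

search(articles, "scrap").map(&:title) # substring match catches "Scraping" and "scraper"
```

Because it matches substrings, a stem like `scrap` or `serializ` picks up `scraping`, `scraper`, `serialization`, and friends, which is exactly why the keyword list above uses truncated terms.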

He has also open-sourced it on GitHub. Here are some reasons for you to look at the code:

  • You want to see how to deploy a Rails project on the cheapest non-free plan running on SQLite (making it cheaper than usual). After I had already embarked on this, I found that someone else had documented a similar experiment running Python on SQLite basically for free, so that might be another option if you are interested in hosting your hobby projects.
  • You want a simple Ruby scraper template for simply structured sites.
  • You wanna see a logger that writes to both STDOUT and a log file (useful for local scraping when you want to watch progress live but also keep logs to search later).
  • You want to see a Turbo stream in action.
  • You want to see how a "first thing that came to mind" search over a small ActiveRecord collection was implemented.
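On that logger bullet: a minimal way to tee log output to both STDOUT and a file in plain Ruby is a small IO-like object that duplicates writes. This is a sketch, not the repo's actual implementation (Rails 7.1+ also ships ActiveSupport::BroadcastLogger for the same job):

```ruby
require "logger"

# A tiny "tee" IO: anything written to it goes to every target.
# Logger only needs the target to respond to #write and #close.
class TeeIO
  def initialize(*targets)
    @targets = targets
  end

  def write(*args)
    @targets.each { |t| t.write(*args) }
  end

  def close
    @targets.each(&:close)
  end
end

log_file = File.open("scraper.log", "a")
logger = Logger.new(TeeIO.new($stdout, log_file))
logger.info("Scraped issue #700") # appears on screen and in scraper.log
```

Watching progress live while keeping a grep-able file is handy when a scrape runs for hours and you only care about one failing page afterwards.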

Ok, I'm writing in the first person again. If you find the search useful, let me know if you would like something similar for the other Cooper Press publications. They have top-notch stuff for JavaScript, Postgres, and much more! I’ll also take suggestions for other scrapeable publications that don’t have a search. Or just grab the code and replace the BASE_URL of the SimpleScraper. Then you’ll have that stuff locally.
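If you do grab the code, the BASE_URL swap is the main moving part. Here is a hypothetical sketch of what a SimpleScraper-style class looks like; the class and method names are illustrative (the real one lives in the repo), and a proper parser like Nokogiri beats the regex used here:

```ruby
require "net/http"
require "uri"

# Illustrative sketch of a BASE_URL-driven scraper for simply
# structured archive pages. Swap BASE_URL for your target site.
class SimpleScraper
  BASE_URL = "https://rubyweekly.com/issues" # assumed example target

  # Fetch a page relative to BASE_URL as a raw HTML string.
  def fetch(path = "")
    Net::HTTP.get(URI("#{BASE_URL}#{path}"))
  end

  # Pull href values out of anchor tags with a crude regex.
  def extract_links(html)
    html.scan(/<a[^>]+href="([^"]+)"/).flatten
  end
end

# Usage: scraper = SimpleScraper.new
#        scraper.extract_links(scraper.fetch).each { |link| ... }
```

From there, the per-issue pages get fetched one by one and their article titles and blurbs land in the database for the brute-force search to chew on.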

Enjoy 🦥
