Noah here. In a recent BrXnd Dispatch (my semi-regular newsletter about AI and marketing), I had a little aside about building a project router for Linear, my task management tool. Basically, it takes in a new task and figures out which project it should belong to. I tried to explain why this kind of stuff is, in my view, the perfect use of AI:
But mostly, this kind of stuff is too perfect for AI. While many companies are adding various “magic” creative tasks to their AI (like this weird new Google Docs writer thing), these basic classification tasks are a much more fruitful and simple place to apply the technology. This is something you shouldn’t have to think about, and now, mostly, I don’t have to anymore.
Why is this interesting?
As I was thinking more about it and talking to a few friends, I tried to put together a broader list of the things this technology is particularly excellent at. I don’t mean the stuff where it’s just good or cool, but the places where it is an order of magnitude better than what we’ve had before—the tasks where I’ve just thought, “Huh, well, I guess that’s now solved.”
One of the first versions of this for me was web scraping. I have spent a lot of time building various web scrapers over the years. It’s not a hard problem; it’s just a super annoying one. That’s because, in the past, when you built a scraper, you had to design it for the specific HTML you were scraping. That meant you had to know the page used an <h1> tag for the title and <p> tags for the body, for instance. But most pages don’t follow the same format, so you needed to customize your scraper for whatever you were doing. And then, as soon as the page structure changed, you needed a new scraper.
The first time I built a scraper using GPT-3, I realized that all I needed was the text, not the structure, and it could easily make sense of it and return it to me in whatever format I needed. This was amazing and continues to be something I use in projects nearly every day. It’s not just that it can classify all this data; it’s that it’s flexible enough to deal with any changes that might arise.
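To make the “all I needed was the text” point concrete, here’s a minimal sketch of the front half of that pipeline: collapsing arbitrary HTML down to plain text with nothing but Python’s standard library, no per-site selectors. (The `TextExtractor` and `page_to_text` names are mine, not from any library; the downstream step of handing the text to a model is described in comments only.)

```python
from html.parser import HTMLParser


class TextExtractor(HTMLParser):
    """Collapse arbitrary HTML to plain text, ignoring scripts and styles.

    No knowledge of the page's structure is needed: we keep every run of
    visible text and let the language model sort out title vs. body later.
    """

    def __init__(self):
        super().__init__()
        self.chunks = []
        self._skip_depth = 0  # inside <script>/<style> when > 0

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if not self._skip_depth and data.strip():
            self.chunks.append(data.strip())


def page_to_text(html: str) -> str:
    """Return the visible text of a page, one chunk per line."""
    parser = TextExtractor()
    parser.feed(html)
    return "\n".join(parser.chunks)


# From here, the extracted text goes into a prompt like:
#   "From the article text below, return JSON with keys
#    'title' and 'body'."
# The model does the structuring, so a redesign of the page's
# HTML doesn't break anything on our end.
```

The key design point is that the only brittle part of a classic scraper, the HTML-structure assumptions, has been pushed out of the code entirely; the model absorbs that variability instead.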
As I was thinking more about the scraping use case and trying to come up with others, I started to realize that scraping, at least the way I’m doing it, is fundamentally no different from my project-assignment task: I’m asking the AI to classify some unstructured data into a structured format. In the case of the scraper, I’m asking it to classify multiple things at once (the title, the body, and some other info), but it’s still fundamentally about classification.
And as I tried to come up with more of these fundamentally game-changing use cases—the things where I can’t imagine a human should ever do that work again—it kept coming back to this. Sure, there is non-classification stuff it’s great at, like summarization, but a place where it continually shines is in figuring out which bucket some arbitrary text should go in.
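The bucket-picking pattern behind the Linear project router can be sketched in a few lines. This is an illustrative version, not the author’s actual implementation: `route_task`, the `"Inbox"` fallback, and the `complete` parameter (any callable that sends a prompt to whatever LLM API you use and returns its text reply) are all assumptions of the sketch.

```python
import json


def route_task(title: str, description: str, projects: list[str], complete) -> str:
    """Ask an LLM to assign a new task to one of a fixed set of projects.

    `complete` is any callable mapping a prompt string to the model's text
    reply (e.g. a thin wrapper around your LLM client). Hypothetical helper,
    not part of any real library.
    """
    prompt = (
        "Assign the task below to exactly one of these projects:\n"
        + "\n".join(f"- {p}" for p in projects)
        + f"\n\nTask title: {title}\nTask description: {description}\n"
        + 'Reply with JSON like {"project": "<name>"} and nothing else.'
    )
    reply = complete(prompt)
    try:
        choice = json.loads(reply).get("project", "")
    except (json.JSONDecodeError, AttributeError):
        choice = ""
    # Constrain the answer to known buckets; anything off-script lands in
    # a catch-all so a flaky reply never misfiles a task.
    return choice if choice in projects else "Inbox"
```

Passing the model call in as a function keeps the classification logic testable without a network connection, and the whitelist check at the end is what makes this safe to run unattended: the model can only ever choose from buckets you defined.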
The implication of this, at least for me and the companies I’ve been spending time with, is that the focus should be more on the tedious tasks than the so-called interesting ones. The amount of work within an organization where some person has to file something in the right place—like a campaign getting the correct campaign ID for reporting—is huge. And while so much of the focus is on image or text generation, this is the kind of low-hanging fruit that companies can take advantage of right now without much concern for the legal questions that still surround LLMs. (NRB)
—
Thanks for reading,
Noah (NRB) & Colin (CJN)
—
Why is this interesting? is a daily email from Noah Brier & Colin Nagy (and friends!) about interesting things. If you’ve enjoyed this edition, please consider forwarding it to a friend. If you’re reading it for the first time, consider subscribing.