Noah here. A few years ago, Jann Schwarz wrote a WITI about omotenashi, a Japanese approach to anticipating someone’s needs and providing a level of service that goes beyond expectations—for example, the Park Hyatt Tokyo remembering how he’d left his hotel room at the end of his first stay, and quietly recreating it when he checked back in for his second. “Like many things Japanese,” he wrote, “it is not easily defined or translated but evolves around bringing your full, authentic self to serve a guest and doing so in a humble, non-ostentatious way. It is about a lack of pretense and showing no expectation of reciprocity.”
Since then, I’ve run into the term all over the place, most recently in a Nikkei article titled “Japan's 'omotenashi' culture can offer an edge in the AI age.” The article argues that while many companies, particularly in hospitality, are focused on finding transactional moments to integrate AI, omotenashi is a uniquely human approach that AI won’t be able to replicate quite so easily.
Why is this interesting?
I have largely tried to avoid AI talk around here. It’s what I’ve been spending most of my professional time on lately, but I find the noise so overwhelming that the bar for interestingness is much higher than with other topics. If you’ve spent ten minutes on LinkedIn or Twitter in the last twelve months, you’ve surely been inundated with AI talk, maybe even a never-ending stream of advice about prompting—complete with PDFs and cheat sheets of the Ten Prompts You Should Never Leave Home Without.™
The truth is actually a lot simpler. When it comes to AI prompting, understanding the general approaches—zero-shot, few-shot, chain-of-thought, etc.—is far more powerful than any specifically worded prompt. To quickly explain: zero-shot is when you just ask a question; few-shot is when your prompt includes examples for the large language model (LLM) to mimic; and chain-of-thought is when those examples also walk through the reasoning steps (the “chain of thought”) that lead to the answer, nudging the model to reason the same way before it responds.
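To make the distinction concrete, here's a minimal sketch of the three styles side by side. The arithmetic word problem and its wording are illustrative assumptions of my own, not drawn from any particular paper; the only thing that matters is how each prompt is shaped.

```python
# Three prompting styles for the same question. The task is a made-up
# arithmetic word problem used purely for illustration.

question = "Roger has 5 tennis balls. He buys 2 cans of 3. How many does he have?"

# Zero-shot: just ask the question, with no examples at all.
zero_shot = question

# Few-shot: prepend worked examples (here, just one) for the model to mimic.
few_shot = (
    "Q: A baker has 3 trays of 4 muffins. How many muffins?\n"
    "A: 12\n\n"
    f"Q: {question}\nA:"
)

# Chain-of-thought: the example answer also spells out the reasoning
# steps, prompting the model to reason the same way before answering.
chain_of_thought = (
    "Q: A baker has 3 trays of 4 muffins. How many muffins?\n"
    "A: Each tray holds 4 muffins, and 3 trays x 4 muffins = 12. "
    "The answer is 12.\n\n"
    f"Q: {question}\nA:"
)
```

The only difference between the few-shot and chain-of-thought prompts is that middle reasoning sentence, which is the whole trick: the model imitates the structure it sees.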
That last technique came out of Google Research in 2022 and was found to return much better results than other techniques. Here’s an image from that paper that helps illustrate the technique:
Last month, a team at Microsoft built on the success of chain-of-thought, showing how better prompting techniques can actually help general models perform as if they were specially tuned for a specific discipline (in Microsoft Research’s case, medicine).
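One of the ingredients in that Microsoft work is ensembling: sampling several chain-of-thought answers and taking a majority vote, shuffling multiple-choice options between samples to reduce position bias. Here's a simplified sketch of that idea, with the caveat that `ask_model`, the medical question, and the stand-in "model" are all my own hypothetical placeholders, and a real implementation would call an actual LLM and include the paper's other components.

```python
from collections import Counter
import random

def ensemble_answer(ask_model, question, choices, n_samples=5, seed=0):
    """Majority-vote over several sampled answers, shuffling the answer
    choices each round to reduce position bias. A simplified sketch of
    the ensembling idea, not the full published method."""
    rng = random.Random(seed)
    votes = []
    for _ in range(n_samples):
        shuffled = choices[:]
        rng.shuffle(shuffled)  # re-order choices so position doesn't dominate
        votes.append(ask_model(question, shuffled))
    # The most common answer across samples wins.
    return Counter(votes).most_common(1)[0][0]

# A stand-in "model" that returns canned answers, purely for demonstration;
# a real ask_model would send a chain-of-thought prompt to an LLM.
canned = iter(["B12", "B12", "folate", "B12", "B12"])
fake_model = lambda q, c: next(canned)
result = ensemble_answer(
    fake_model,
    "Which vitamin deficiency causes pernicious anemia?",
    ["B12", "folate", "iron"],
)
print(result)  # -> B12
```

Even this toy version shows why the approach helps: a single noisy sample can be wrong, but the vote across samples is more stable.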
What does all this have to do with omotenashi? Reading these papers recently, I’ve been noodling on whether a key part of getting great results out of these systems will actually be the self-awareness and process understanding required to write great chain-of-thought-like prompts. It seems to me that what comes with an omotenashi approach is a deeply considered approach to service that might actually lead to better results than the more transactional uses the technology is being considered for. (NRB)
—
Thanks for reading,
Noah (NRB) & Colin (CJN)
—
Why is this interesting? is a daily email from Noah Brier & Colin Nagy (and friends!) about interesting things. If you’ve enjoyed this edition, please consider forwarding it to a friend. If you’re reading it for the first time, consider subscribing.