The Mirrorworld Edition

On the future, reality, and reflection

Brady Moore (BJM) is a longtime friend of WITI and a former Green Beret officer. Today he’s Director of Mission Support at Cesium in Philadelphia. He’s written extensively about how to translate the planning process that he’s used in harsh/unforgiving environments into the business/startup realms. Today, he takes on the mirrorworld. It is a longer piece, but knowing our audience, you’re going to enjoy it. -Colin (CJN)

Brady here. In 2018, futurist Kevin Kelly wrote an article in Wired magazine that described how advances in augmented reality would create a virtual world that looked and behaved exactly like our own. 

Using the term mirrorworld, first popularized in 1991 by Yale computer scientist David Gelernter, Kelly said it will “reflect not just what something looks like but its context, meaning, and function. We will interact with it, manipulate it, and experience it like we do the real world.” Drawing on the “digital twin” concept first attempted by NASA in the 1960s, when engineers kept a duplicate of every component they sent into space so they could troubleshoot it back on Earth, Kelly predicted that everything would have a digital twin.

For this to happen, Kelly explained that we’d need to build a 3D model of physical reality in which to place those twins—which would necessarily take into account the additional dimension of time. You’d be able to see what something looked like 2 years ago, and what it’s expected to look like 2 years from now. “These scroll-forward scenarios will have the heft of reality because they will be derived from a full-scale present world. In this way, the mirrorworld may be best referred to as a 4D world.”

Kelly imagined it would take at least a decade for mirrorworld to develop enough to be widely used and to reflect our entire planet.

Why is this interesting?

One of the things that hold people back from creating and using "digital twins" of their processes, products, or environments is that the data come in so many different formats—and the files can be so big that they’re hard to deliver to the people who need to see and use them. Think of all the interconnected and overlaid systems that exist in a city—roads, electricity, sewage, and so on—and you can get an idea of how many levels of complexity there can be. Now imagine that each of those systems has its own totally unique way of describing how it works. Getting the full picture of how a city functions would require ingesting and translating each unique format.

Since early 2020, I’ve been working as Director of Mission Support at Cesium, a startup in Philadelphia that builds and delivers a growing, open platform for 3D geospatial visualization. 3D geospatial is a tough problem today: while lidar and drone cameras can piece together 3D models of things or places, getting those models accurately placed on the map and streamed where they need to go can be a long, manual process. In essence, we are trying to help everyone operating in meatspace describe how their systems function using the same language. The platform does that by automating the process of combining models, converting their formats to an open spec, and streaming them (which also deals with the problem of file size). It simplifies complexity so that data can be easily accessed and used wherever it’s needed.

The technology began as an open-source project in 2011 that created CesiumJS, an engine for visualizing the world—and the objects within it—with extreme precision in time-aware 3D. If you work in mapping, geospatial intelligence, drone flight management, modeling & simulation, space domain awareness, large-scale construction, or commercial real estate, there’s a good chance you’re using something built on CesiumJS right now, or have in the past, to visualize and analyze your work. CesiumJS was originally built to track satellites in orbit over time, but once sensors on cars and drones started gathering 3D data at a rapid rate, the Cesium team created 3D Tiles in 2015: an open standard for streaming geospatial data as a spatial hierarchy so it can be quickly and easily rendered on any device. To serve the growing user base, the team launched Cesium ion, a commercial subscription platform that takes data in dozens of formats and hosts, optimizes, and streams it as a service to any device—just like YouTube optimizes raw video for streaming. And since it seems everyone’s got some data but maybe not terrain elevation or the world’s buildings, we also curate a global terrain dataset and all the world’s crowdsourced buildings from OpenStreetMap and make them available.
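For the curious, the core idea behind streaming formats like 3D Tiles can be sketched in a few lines. This is a simplified illustration, not Cesium’s actual code: tiles in the hierarchy carry a geometric error (in meters), and a client only refines to a tile’s children when that error, projected onto the screen, is still too many pixels. All names here are illustrative.

```javascript
// Project a tile's geometric error (meters) into screen pixels for a viewer
// at `distance` meters, with vertical field of view `fovY` (radians) and a
// viewport `screenHeight` pixels tall.
function screenSpaceError(geometricError, distance, screenHeight, fovY) {
  return (geometricError * screenHeight) / (2 * distance * Math.tan(fovY / 2));
}

// Walk the tile tree, collecting tiles to render: refine into children while
// the on-screen error still exceeds the target budget (e.g., 16 pixels).
function selectTiles(tile, distance, screenHeight, fovY, maxSSE, out = []) {
  const sse = screenSpaceError(tile.geometricError, distance, screenHeight, fovY);
  if (sse > maxSSE && tile.children && tile.children.length > 0) {
    for (const child of tile.children) {
      selectTiles(child, distance, screenHeight, fovY, maxSSE, out);
    }
  } else {
    out.push(tile); // coarse enough at this distance: stream just this tile
  }
  return out;
}

// A toy two-level tileset: one coarse city tile with four detailed quadrants.
const city = {
  name: "city",
  geometricError: 100,
  children: [
    { name: "NE", geometricError: 0 },
    { name: "NW", geometricError: 0 },
    { name: "SE", geometricError: 0 },
    { name: "SW", geometricError: 0 },
  ],
};

const fovY = Math.PI / 3; // 60-degree field of view
// Far away, the coarse tile alone is good enough to draw...
console.log(selectTiles(city, 50000, 1080, fovY, 16).map((t) => t.name));
// ...but up close, the client refines into the four detailed quadrants.
console.log(selectTiles(city, 500, 1080, fovY, 16).map((t) => t.name));
```

The payoff of this approach is that a device never has to download the whole dataset—only the handful of tiles that matter for its current viewpoint.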

So how close are we to mirrorworld? There’s still a long way to go for it to be a real “world”—to mirror reality, it has to reflect absolutely everything in the real one. But the most recent step toward it came earlier this month: Cesium released an open-source plugin, made possible by a MegaGrant from Epic Games, that lets developers in Unreal Engine build games and simulations spanning a highly accurate representation of the entire planet, complete with physics and the ability to zoom in and out at true gaming performance. Because of it, you’ll soon be seeing games and training simulations set in environments mapped from the real world.

What it’s really going to change in the short term is who’s empowered to make the simulations of the future. In the national security sector, simulations are a $12B market dominated by a small handful of massive global companies with proprietary software that takes years to develop. Through open-source innovations like this one, soon nearly anyone will be able to build simulations that rival the multi-million-dollar, hard-to-get legacy systems in use today—and to do so faster and in greater numbers. The entire industry will develop more realistic and more useful environments, for more people and for more purposes. Early examples are likely to focus on reducing risk, such as virtual training environments for heavy equipment operators and people in dangerous jobs, and on simplifying layers of complexity, as we see with living digital twins of cities that let planners and developers monitor and analyze interconnected systems together. The bet is that through this evolution of the market, once-niche simulations will become a more common part of our lives, and over time you’ll see more and more digital representations of the things you use every day. It feels like the foundation is already under our feet.

This is a simulation. Mount Fuji at sunrise in Cesium for Unreal. 

Digital twin of the Greek Catholic Church of the Protection of the Mother of God in Stare Oleszyce, Poland. Created from 1325 images (NIKON DSLR plus DJI Phantom 4 PRO) in Reality Capture. Visualized in Cesium for Unreal.

Quick Links:

  • In the spirit of Cesium’s open-source heritage, we’ve made the plugin open source and available on the Unreal Marketplace. Anyone can get it for free and use it to build games, simulations, and digital twins. (BJM)

  • Over 350 million 3D buildings globally - crowdsourced through OpenStreetMap. Check them - and their metadata - out here (BJM)

  • Try out the open platform for 3D geospatial yourself with a free Cesium ion community account (BJM)

Thanks for reading,

Noah (NRB) & Colin (CJN) & Brady (BJM)

Why is this interesting? is a daily email from Noah Brier & Colin Nagy (and friends!) about interesting things. If you’ve enjoyed this edition, please consider forwarding it to a friend. If you’re reading it for the first time, consider subscribing (it’s free!).