Not virtual reality, nor augmented reality, nor mixed reality, nor a “headset”, “glasses” or “visor”. Apple is very clear that its Vision Pro is a spatial computing device. This is the term (or the vision, pun intended) that Apple has for the future of personal computing.
Now the question is: how do we define spatial computing? What will it let us do with the current Vision Pro and with the models we will see in the future? Let’s go through the details to be clear about the concept with which Apple could change everything, and to avoid confusion around something so new.
What does spatial computing mean?
From the beginnings of personal computing until today, interacting with electronic devices has always meant looking at their screens. We could mention “prehistoric” exceptions like punched cards, but today we cannot conceive of a computer or a mobile phone without a screen: a two-dimensional surface on which to view the interface we interact with.
Apple wants to change this with what it calls spatial computing. And no, it’s not that we’re going to use our Macs in space. We can define spatial computing as a mode of interaction with a device’s graphical interfaces that takes place not on a screen, but in the environment around us, using that space as a virtual workspace. This is why it is called “spatial computing”.
A device offering spatial computing has screens, but we will not look at those screens to work with it. For the first time, we will look through them. This allows us to merge the real world with the graphical elements of the device, which is exactly what visionOS does.
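To picture what this means in software terms, here is a minimal sketch of a visionOS app in SwiftUI. The app name, the “Immersive” identifier and the floating sphere are purely illustrative; the point is that the app declares windows and an immersive space that the system places in the room around the user, rather than drawing into a single flat display.

```swift
import SwiftUI
import RealityKit

// Sketch: the skeleton of a visionOS app. Rather than rendering to one flat
// screen, the app declares scenes that the system places in physical space.
@main
struct SpatialDemoApp: App {
    var body: some Scene {
        // A familiar 2D window, but floating freely in the user's room.
        WindowGroup {
            Text("Hello, spatial computing")
                .padding(40)
                .glassBackgroundEffect()
        }

        // An immersive space surrounds the user with 3D content that blends
        // with (or fully replaces) the real environment.
        ImmersiveSpace(id: "Immersive") {
            RealityView { content in
                // Illustrative placeholder: a small sphere about a meter ahead.
                let sphere = ModelEntity(mesh: .generateSphere(radius: 0.1))
                sphere.position = [0, 1.5, -1]
                content.add(sphere)
            }
        }
    }
}
```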
What are the requirements for spatial computing?
Spatial computing imposes special requirements on an electronic device like the Vision Pro. Until now, a device only had to generate graphics on a two-dimensional screen, nothing more. But a spatial computing device must generate its graphical interface in three dimensions, and in a way that “merges” it with the real elements around us. And it must do all this without causing dizziness or any kind of discomfort for the user.
This requires several sensors and cameras that detect the elements around us and draw a real-time map of them, with data captured very frequently and processed on the fly. If done well, we get the effect of believing that the interface elements are really there, indistinguishable from the real world around us.
Spatial computing requires devices to add extra cameras, sensors and chips to capture the environment and process it in real time.
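On visionOS, developers reach this live map of the surroundings through ARKit’s scene reconstruction. A minimal sketch, assuming it runs inside an immersive space and the user has granted world-sensing permission (the function name is ours):

```swift
import ARKit

// Sketch: streaming a live mesh of the room on visionOS. ARKitSession and
// SceneReconstructionProvider are the visionOS-native ARKit APIs, and
// anchorUpdates delivers mesh patches as the sensors rescan the environment.
func startSceneReconstruction() async throws {
    let session = ARKitSession()
    let sceneReconstruction = SceneReconstructionProvider()

    // Begin sensing; this only works inside an immersive space,
    // and only after the user has authorized world sensing.
    try await session.run([sceneReconstruction])

    for await update in sceneReconstruction.anchorUpdates {
        // Each MeshAnchor carries the geometry of a patch of the real room,
        // updated continuously as the cameras and sensors remap it.
        print("Mesh anchor \(update.anchor.id): \(update.event)")
    }
}
```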
The Apple Vision Pro achieves this by combining two chips: an M2 (the same as in the MacBook Air) and a new R1 chip that is responsible for processing all the data captured by the 12 cameras, 6 microphones and 5 sensors, so that the sensation for the user is seamless. The photon-to-photon latency achieved in the Vision Pro is 12 ms, which means that only 12 milliseconds pass between the moment a real-world photon of light is captured by the Vision Pro’s sensors and the moment its screens show it to our eyes.
And since we’re talking about screens, every spatial computing device needs two, one for each human eye. This implies that the two screens must show slightly different images, generated and synchronized in real time, so that the user has the sensation of interacting with the real world in three dimensions.
We must therefore talk about stereo pixels, because every pixel we see through the Vision Pro is the result of combining two, one from each display. With 23 million pixels in total across its two micro-OLED displays, that works out to roughly 11.5 million stereo pixels.
And if we use a mouse on traditional screens, what will we use in spatial computing? In Apple’s case, our own gaze: wherever we look is where we point. To activate any element of the visionOS interface, all we need to do is look at it and tap the thumb and index finger of one hand together.
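In developer terms, standard SwiftUI controls on visionOS already respond to gaze and pinch with no extra input code. A minimal sketch (the view name and labels are illustrative):

```swift
import SwiftUI

// Sketch: on visionOS, ordinary SwiftUI controls are driven by gaze + pinch.
// Looking at the button highlights it; pinching thumb and index finger
// together triggers its action.
struct GazeAndPinchDemo: View {
    @State private var pinchCount = 0

    var body: some View {
        VStack(spacing: 20) {
            Text("Pinched \(pinchCount) times")
            Button("Look at me, then pinch") {
                pinchCount += 1  // fired by the gaze-targeted pinch
            }
            .hoverEffect(.highlight)  // feedback while the gaze rests on the control
        }
        .padding(40)
        .glassBackgroundEffect()  // the standard visionOS "glass" window material
    }
}
```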
How can we use spatial computing?
This is perhaps the most important question, because the Vision Pro is now starting to be seen on the streets and many people will wonder what it can actually do. And in the case of the Vision Pro, there are quite a few possibilities:
- For practical purposes, we are talking about a Mac that we wear on our head. We will be able to use visionOS apps and compatible iPadOS apps, or even see the screen of a Mac as if it were just another window in our three-dimensional environment.
- In the world of education, a spatial computing device can greatly facilitate training for high-risk tasks such as surgical operations, work on hazardous architectural projects, climbing simulations or piloting an airplane…
- Viewing content such as series and films will be one of the most common activities, since it removes the size limit of your living room television. Additionally, Disney+ has already demonstrated what sporting events can offer, showing a match from different angles/cameras with additional data updated in real time. Apple has also made sure that the three-dimensional photographs we can already take with the iPhone 15 Pro look good in visionOS.
- And of course, there are many possibilities with games. We will move from playing on a flat screen to merging games with our reality, or even immersing ourselves in a completely virtual world. The Vision Pro is compatible with the major living-room console controllers, so that groundwork is already in place (see the sketch below).
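That controller support comes through the same GameController framework used on iOS and iPadOS. A minimal sketch of detecting a pad and reading one button (the print statements are placeholders):

```swift
import GameController

// Sketch: reacting when a console controller (DualSense, Xbox pad, etc.)
// pairs with the headset. GameController is the same framework as on iOS.
func observeControllers() {
    NotificationCenter.default.addObserver(
        forName: .GCControllerDidConnect, object: nil, queue: .main
    ) { notification in
        guard let controller = notification.object as? GCController,
              let gamepad = controller.extendedGamepad else { return }
        print("Connected: \(controller.vendorName ?? "unknown controller")")

        // Fires whenever the A / cross button changes state.
        gamepad.buttonA.pressedChangedHandler = { _, _, isPressed in
            if isPressed { print("Button A pressed") }
        }
    }
}
```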
What we can expect for the future of spatial computing
For now, the Apple Vision Pro is Apple’s first step into this new world. It is a first-generation device, and over time we will see future generations arrive. There are already rumors of a cheaper “Apple Vision”, and it is believed that in the future we will have a varied range of models adapted to the needs and budgets of all users.
Inevitably, we will also see moves from the competition, which will seek to enter this new spatial computing market and compete with the Vision Pro with devices of their own. A “Samsung Galaxy Vision”? Who knows. For now, this looks like the start of a whole new world.
In Applesfera | Apple’s spatial computing is now a reality: Apple Vision Pro launches from New York
In Applesfera | “I knew we would get there for years”: Tim Cook looks to the future with Vision Pro and has a very clear roadmap