One of the big novelties of WWDC23 was undoubtedly the presentation of the Apple Vision Pro, a mixed reality headset that lets the user blend the real and the virtual to make possible experiences we previously thought unthinkable.
If you have seen its presentation, you will have noticed that something sets Apple's proposal apart from the rest of the options on the market. Well, there are several things, but the one most obvious to the naked eye is the lack of controllers: this product is controlled entirely by gestures.
Controllers are overrated
Much of how the Vision Pro works is, quite literally, in our hands. With them we can open applications, resize a window, zoom, move objects, rotate them, and so on. According to those who have tried it, the system works quite well, and in my opinion that approach is the right one.
Several gestures can be performed to interact with the system. There will be a learning curve depending on who uses the Vision Pro, but it is quite intuitive:
- Double tap: works like a double click or double tap on iPhone or Mac.
- Pinch and hold: similar to a tap and hold (long press) on iPhone.
- Pinch and drag: used to scroll and move windows. You can scroll horizontally or vertically, and the faster you move your hand, the faster you scroll.
- Zoom: pinch with both hands and pull them apart to zoom in on the image. Window size can also be adjusted by dragging the corners.
- Rotate: pinch with both hands and rotate them; used to manipulate virtual objects.
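For readers who also develop for the platform, here is a minimal SwiftUI sketch, my own illustration rather than Apple sample code, of how these system gestures typically reach an app: visionOS delivers look-and-pinch input as ordinary SwiftUI gestures, so the familiar recognizers apply. The view name and printed messages are hypothetical.

```swift
import SwiftUI

// A minimal sketch of how the system gestures described above reach an app:
// visionOS delivers look-and-pinch input to SwiftUI as ordinary gestures,
// so the usual recognizers apply.
struct GestureDemoView: View {
    @State private var offset: CGSize = .zero  // moved by pinch and drag
    @State private var scale: CGFloat = 1.0    // changed by the zoom gesture
    @State private var angle: Angle = .zero    // changed by the rotate gesture

    var body: some View {
        RoundedRectangle(cornerRadius: 24)
            .fill(.blue.gradient)
            .frame(width: 240, height: 240)
            .scaleEffect(scale)
            .rotationEffect(angle)
            .offset(offset)
            // Double tap: two quick pinches (declared before the single tap
            // so both recognizers get a chance to fire).
            .onTapGesture(count: 2) { print("double tap") }
            // Tap: a quick look-and-pinch arrives as a plain tap.
            .onTapGesture { print("tap") }
            // Pinch and hold: behaves like a tap and hold (long press).
            .onLongPressGesture { print("pinch and hold") }
            // Pinch and drag: move the content around.
            .gesture(DragGesture().onChanged { offset = $0.translation })
            // Zoom: pinch with both hands and pull them apart.
            .gesture(MagnifyGesture().onChanged { scale = $0.magnification })
            // Rotate: pinch with both hands and turn them.
            .gesture(RotateGesture().onChanged { angle = $0.rotation })
        // In a real app you might combine the drag/zoom/rotate gestures with
        // .simultaneously(with:) instead of attaching them separately.
    }
}
```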
The truth is that, just from their description, the gestures sound quite intuitive and easy to remember. It is quite an innovation: those in Cupertino know this is the future, which is why they came up with such a revolutionary system, and according to those who have already tried it, it works quite well.