Apple unveiled two significant AI features at its iPhone 17 launch event, both with the potential to enhance everyday use. The first is the smart selfie camera on the iPhone 17: a square sensor doubles the front camera's resolution to 24 megapixels and outputs crisp 18MP images in both vertical and horizontal orientations.
By tapping the rotate button, users can switch between vertical and horizontal formats without physically turning the phone. The AI comes in with the optional “Auto Zoom” and “Auto Rotate” settings: the camera automatically detects faces, adjusts the width of the shot, and decides the optimal orientation.
This feature, dubbed “Center Stage,” ensures everyone in the frame is captured perfectly. It worked seamlessly during a hands-on experience at Apple Park.
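For developers, this kind of automatic framing is already exposed through AVFoundation's Center Stage controls on other Apple cameras. The following is a minimal Swift sketch, assuming the iPhone 17 front camera adopts the same API (the function name is illustrative):

```swift
import AVFoundation

// Sketch: opting an app into Center Stage's automatic framing.
// Assumes the iPhone 17 front camera exposes the same AVFoundation
// Center Stage controls that iPad cameras already do.
func configureCenterStage() -> AVCaptureDevice? {
    // Let the app, rather than only the user via Control Center,
    // govern whether Center Stage is on.
    AVCaptureDevice.centerStageControlMode = .app
    AVCaptureDevice.isCenterStageEnabled = true

    guard let frontCamera = AVCaptureDevice.default(
        .builtInWideAngleCamera, for: .video, position: .front
    ) else { return nil }

    // isCenterStageActive is key-value observable, so the UI can
    // react when automatic framing kicks in or stops.
    print("Center Stage active:", frontCamera.isCenterStageActive)
    return frontCamera
}
```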
Smart camera and translation features
Apple included this camera in all iPhone 17 models, from the standard version to the iPhone Air. The second AI feature is live translation in the new AirPods Pro 3. Apple’s Translate app has lagged behind Google Translate, but this implementation aims to be a game-changer.
During a demo with fellow journalists, a Spanish speaker talked naturally while listeners heard the translation through their AirPods. The translation was immediate, smooth, and accurate, making real-time conversation possible between people who don’t share a common language. Although the feature currently supports fewer languages than Google’s, it is more refined and user-friendly.
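Apple hasn't said what powers the AirPods pipeline, but iOS already ships an on-device Translation framework, which gives a sense of the session-based call an app could make. Here is a minimal SwiftUI sketch under that assumption; the Spanish-to-English pairing, sample text, and view name are illustrative:

```swift
import SwiftUI
import Translation

struct LiveCaptionView: View {
    @State private var spanishInput = "¿Dónde está la estación de tren?"
    @State private var englishOutput = ""

    var body: some View {
        VStack {
            Text(spanishInput)
            Text(englishOutput)
        }
        // translationTask hands us a session once the Spanish-to-English
        // model is available, downloading it on first use if needed.
        .translationTask(
            source: Locale.Language(identifier: "es"),
            target: Locale.Language(identifier: "en")
        ) { session in
            do {
                let response = try await session.translate(spanishInput)
                englishOutput = response.targetText
            } catch {
                englishOutput = "Translation unavailable: \(error)"
            }
        }
    }
}
```

Because the framework runs its models on device, a call like this keeps working without a network connection once the language pair has been downloaded, which fits the low-latency behavior seen in the demo.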
It showcases Apple’s ability to apply generative AI effectively within its ecosystem. These AI-driven advancements, subtly yet powerfully integrated into Apple’s product line, underscore the company’s commitment to improving the user experience. The smart selfie camera and live translation exemplify practical AI applications set to become indispensable in daily tech interactions.
