Archer, P.I. is an Augmented Reality app designed to interface directly with the TV show, allowing the player to literally grab an item from an episode and interact with it while immersed in a rich secondary narrative that complements developments in the show. The app seamlessly weaves animated video storylines with graphical hints while leveraging game mechanics to surprise and delight fans as they solve a new case for each episode.
I developed a range of interactive AR gameplay props and puzzles by configuring image targets in our database, ensuring each physical item was properly recognized by the player's mobile camera and added to their in-game inventory. Tapping an item in the inventory would transition the player to a dedicated scene, where they could closely inspect the object and interact with additional clues.
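The recognize-then-inspect flow above can be sketched roughly as follows. Our actual gameplay code lived in Unity (C#), so this is a hypothetical plain-Java model of the same logic: a recognized image target adds a prop to the inventory exactly once, and tapping an inventory item resolves to an inspection scene. The class and scene-naming convention here are illustrative, not our shipped names.

```java
import java.util.LinkedHashSet;
import java.util.Set;

// Hypothetical sketch of the inventory flow described above.
public class PropInventory {
    // LinkedHashSet preserves pickup order while preventing duplicate pickups.
    private final Set<String> items = new LinkedHashSet<>();

    // Called when the AR middleware recognizes an image target through the
    // camera. Returns true only on a new pickup.
    public boolean onTargetRecognized(String targetName) {
        return items.add(targetName);
    }

    // Called when the player taps an item in the inventory UI.
    // Returns the name of the dedicated inspection scene to load.
    public String onItemTapped(String targetName) {
        if (!items.contains(targetName)) {
            throw new IllegalArgumentException("Item not in inventory: " + targetName);
        }
        return "Inspect_" + targetName;
    }

    public int size() {
        return items.size();
    }
}
```

Keeping recognition idempotent matters in AR: the camera re-detects the same physical prop many times per second, so the pickup event should fire only on the first hit.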
The lock system included a scripted failure state: after a few correct attempts (or enough failures), the puzzle would break. The dial popped off, revealing a hidden compartment and delivering the next clue in the narrative sequence.
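The scripted break can be modeled as a simple state machine that counts both outcomes and trips on whichever threshold is reached first. This is a minimal sketch with illustrative thresholds, not the shipped tuning values:

```java
// Hypothetical sketch of the scripted-break mechanic: the lock tracks
// successes and failures, and either path eventually triggers the break
// that reveals the hidden compartment.
public class DialLockPuzzle {
    private static final int SUCCESSES_TO_BREAK = 3;  // "a few correct attempts"
    private static final int FAILURES_TO_BREAK = 10;  // "enough failures"

    private int successes = 0;
    private int failures = 0;
    private boolean broken = false;

    // Returns true once the lock breaks and the compartment opens.
    public boolean attempt(boolean correct) {
        if (broken) return true; // already broken; stay broken
        if (correct) successes++; else failures++;
        if (successes >= SUCCESSES_TO_BREAK || failures >= FAILURES_TO_BREAK) {
            broken = true; // dial pops off, next clue is delivered
        }
        return broken;
    }

    public boolean isBroken() {
        return broken;
    }
}
```

Counting failures toward the break is a quiet anti-frustration device: a stuck player still progresses, just along a slower path.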
I worked on the permission system spanning our game engine (Unity), our AR middleware (Vuforia), and both target operating systems (Android and iOS).
This was a challenge because we needed to request camera permissions from our users before the engine initialized, but depending on the operating system and its specific version, the user might not receive a permission prompt until the app had already launched. The problem was magnified on Android devices due to the platform's market fragmentation in 2017.
Each version of the Android operating system had a different philosophy on how to manage permissions and privacy settings. For example, some allowed camera use only while the app was open, some expected blanket, always-on access granted up front, and others handled each request on a case-by-case basis.
I decided to create a native Android activity in Java that requested permissions from the user in the manner appropriate to their OS version (and, on more modern versions of Android, to their stored preferences). This activity then launched our engine, initialized our middleware, loaded our assets, and brought the user to the title screen of our app.
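The heart of that activity was a version branch: Android 6.0 (API level 23, Marshmallow) introduced runtime permissions, while earlier versions granted everything at install time. The real activity called the Android permission APIs and then handed off to the engine; the sketch below isolates just the decision logic in plain Java so it runs off-device. The class and method names are my own for illustration.

```java
// Hypothetical sketch of the version-branching logic a permission-gating
// launcher activity needs. On-device, sdkInt would come from
// android.os.Build.VERSION.SDK_INT.
public class PermissionFlow {
    public enum Strategy { INSTALL_TIME, RUNTIME_PROMPT }

    private static final int MARSHMALLOW = 23; // Build.VERSION_CODES.M

    // API 23+ requires prompting the user at runtime; earlier versions
    // granted the camera permission when the app was installed.
    public static Strategy cameraStrategy(int sdkInt) {
        return sdkInt >= MARSHMALLOW ? Strategy.RUNTIME_PROMPT
                                     : Strategy.INSTALL_TIME;
    }

    // The engine may only initialize the camera once permission is settled:
    // pre-23, install implies grant; 23+, the user must explicitly grant it.
    public static boolean canInitEngine(int sdkInt, boolean userGranted) {
        return cameraStrategy(sdkInt) == Strategy.INSTALL_TIME || userGranted;
    }
}
```

Gating engine initialization on `canInitEngine` is what prevents the middleware from touching the camera before the OS has actually granted access.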
On the iOS side, device fragmentation and permission standards were far more consistent, which made them much easier to manage, but we still needed to make sure permissions remained active while system initialization was in progress. I used a similar approach to develop a native iOS app for phones and tablets in Objective-C, and integrated the OS X build process into our team's existing deployment pipeline for convenient iterative builds on each platform. This kept our testing team up to date with new gameplay features, performance metrics, asset quality checks, and permission verification across all supported devices.