Tommy the Turtle: Color Splash is designed to teach neurodivergent K-5 children shapes, colors, spelling, handwriting, and math concepts. I worked as the team's sole programmer to build the product from a blank canvas to a polished vertical slice to be expanded on by a future content team.
In the early phases of the project, I wanted to add a bit of polish to our systems in preparation for a product demo. To grab the player's attention and keep it, I began adding audio cues, particles, and subtle UI animations for interactable elements to each of our 8 learning topics. In our very first playtest, we found that nobody was completing the spelling challenge. We saw that children were picking up letters and immediately putting them back down, on repeat, for multiple minutes at a time.
After some observation, I quickly understood the root of why things weren't working: I had misunderstood what the reward even was. I thought it was the excitement of completing the challenge and being awarded a completion badge on the main 'Color World' screen, but the kids weren't focused on that. Instead, they were focused on the immediate reward: exciting shapes and sounds. I had inadvertently frontloaded the reward and left no reason to complete the learning curriculum.
Learning this early was crucial, because it allowed me to revisit how I was handling rewards, engagement, and polish for this particular project. I made sure to defer effects and to limit which elements could be interacted with, and more importantly, when. In the spelling puzzle, I removed the blip-and-boop audio and replaced it with sounding out each letter as the player picked it up. In our next playtest, we A/B tested a version of the spelling game with no pickup audio against one that said the letter out loud when it was picked up. Surprisingly, we found that the version that sounded out letters was not overly distracting like the delicate popping sounds from our previous test, and in some cases it even improved focus over the audioless version.
There were 2 skills that required our players to draw using a touch screen. Both the handwriting and tracing puzzles used the same algorithm to determine pass/fail conditions. I'll focus mainly on the handwriting skill, since it has a few added parts, like additional strokes and letters.
To start, I created a prefab representing each letter of the English alphabet.
Let's look at the letter 'R', which consists of 3 strokes.
Notice that strokes 1 and 3 are straight lines, while stroke 2 is curved. To address curvature within a single stroke, I introduced nodes to serve as waypoints for tracing. Each stroke must consist of at least 2 nodes, and these nodes are circular colliders responsible for indicating which line segment the player is currently working on.
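To make the stroke/node relationship concrete, here is a minimal Python sketch of how a letter's data might be organized. The class names (`Node`, `Stroke`, `Letter`) and the coordinates for 'R' are illustrative stand-ins, not the project's actual Unity prefab structure, where each node was a circular collider:

```python
from dataclasses import dataclass

@dataclass
class Node:
    """A circular waypoint collider along a stroke."""
    x: float
    y: float
    radius: float = 0.5  # how far the player's touch may stray

@dataclass
class Stroke:
    """An ordered run of nodes. Straight strokes need only their two
    endpoints; curved strokes add intermediate waypoint nodes."""
    nodes: list

    def __post_init__(self):
        assert len(self.nodes) >= 2, "each stroke needs at least 2 nodes"

@dataclass
class Letter:
    """A letter prefab: its strokes, drawn in order."""
    char: str
    strokes: list

# The letter 'R': strokes 1 and 3 are straight (2 nodes each),
# while stroke 2 is curved and carries an extra waypoint node.
r = Letter("R", [
    Stroke([Node(0, 0), Node(0, 4)]),                  # vertical line
    Stroke([Node(0, 4), Node(1.5, 3), Node(0, 2)]),    # curved bowl
    Stroke([Node(0, 2), Node(1.5, 0)]),                # diagonal leg
])
```

A curved stroke simply carries more nodes, so the tracing logic never needs to distinguish straight segments from curved ones.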
To represent handwriting accurately, I created a series of dynamic Unity LineRenderers, one for each individual stroke. Below is a sample code snippet for the initial strokes.
To create an accurate representation of the user's handwriting, I first had to convert 2D touch input from screen space into world space so it could be checked against the colliders. If the user stayed within the threshold for each waypoint in a stroke, a new vertex was added to the LineRenderer component.
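The steps above can be sketched as a small per-frame loop: convert the touch to world space, check it against the current segment's threshold, append a vertex when it passes, and advance to the next waypoint once its circular collider is reached. Everything below (`screen_to_world`, `StrokeTracer`, the coordinates) is an illustrative Python stand-in for the Unity equivalents (`Camera.ScreenToWorldPoint`, a growing `LineRenderer`), not the project's actual code:

```python
import math

def screen_to_world(sx, sy, cam_x, cam_y, pixels_per_unit):
    """Simplified orthographic screen-to-world conversion; Unity's
    Camera.ScreenToWorldPoint plays this role in the real project."""
    return (cam_x + sx / pixels_per_unit, cam_y + sy / pixels_per_unit)

def dist_to_segment(p, a, b):
    """Distance from point p to segment ab, used as the 'stayed within
    the threshold' check while tracing between two waypoints."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

class StrokeTracer:
    """Tracks progress along one stroke's waypoints and records the
    player's path as line vertices while they stay within threshold."""
    def __init__(self, waypoints, radius):
        self.waypoints = waypoints  # ordered (x, y) node centers
        self.radius = radius        # circular-collider radius
        self.next_index = 0         # waypoint the player must reach next
        self.vertices = []          # the drawn polyline (LineRenderer stand-in)

    def feed(self, world_point):
        """Consume one world-space touch point; returns True once the
        final waypoint of the stroke has been reached."""
        if self.next_index >= len(self.waypoints):
            return True
        target = self.waypoints[self.next_index]
        if self.next_index == 0:
            # The stroke must begin inside the first node's collider.
            near = math.hypot(world_point[0] - target[0],
                              world_point[1] - target[1]) <= self.radius
        else:
            prev = self.waypoints[self.next_index - 1]
            near = dist_to_segment(world_point, prev, target) <= self.radius
        if near:
            self.vertices.append(world_point)  # grow the rendered line
            if math.hypot(world_point[0] - target[0],
                          world_point[1] - target[1]) <= self.radius:
                self.next_index += 1           # waypoint reached, advance
        return self.next_index >= len(self.waypoints)

# Trace a straight 2-node stroke from (0, 0) up to (0, 4).
tracer = StrokeTracer([(0, 0), (0, 4)], radius=0.5)
results = [tracer.feed((0.0, y)) for y in (0.0, 1.0, 2.0, 3.0, 4.0)]
```

Points that stray beyond the threshold are simply not appended here; the real system could equally treat them as a failed attempt, depending on how strict the pass/fail condition needs to be.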
The final version included visual feedback so players could understand where their handwriting went wrong, as seen below in the 'U' in JUICE.