
Resistor Reader

  • Writer: Daniel Louie
  • Mar 4, 2024
  • 2 min read

Updated: Oct 22, 2024

I participated in Apple's Swift Student Challenge, an open-ended coding competition that encourages students to demonstrate creativity using the tools provided by Xcode and Swift Playgrounds. This was a great opportunity for me to get back into iOS development and start building an app idea I had been incubating for a while!


I am color deficient, along with roughly 8% of the global male population. I'm also an engineering student, and I recently experienced firsthand how frustrating color-dependent electronics work can be, so I aim to build solutions to the difficulties of life with vision impairment, starting with this specific challenge. While there are apps on the App Store that translate user-given colors to resistance, none actually solve the core challenge of color deficiency: identifying the colors in the first place!


My app, Resistor Reader, reads a resistor's color bands from an image and translates them into a resistance value. I learned SwiftUI for the app and integrated frameworks such as AVFoundation for capturing images, CoreImage for the camera preview, and PhotoKit for uploading and processing images. The user uploads an image and uses color pickers to locate the color bands. Each color sample includes a text description of the depicted color, a crucial feature for color-deficient people that apps rarely provide. Once the color bands are identified, the app determines the resistance value. Through this project, I also discovered Apple's Human Interface Guidelines, which offer meticulous documentation on best practices for user experience and accessibility in front-end development. I was fascinated by the intentional design choices and attention to detail, and I plan to integrate them into future versions.
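To make the translation step concrete, here's a minimal sketch of how the standard 4-band resistor color code maps bands to a resistance value. This is an illustration, not the app's actual source: the `BandColor` enum and `resistance` function are names I'm using here for clarity, and the gold/silver multiplier and tolerance bands are omitted for brevity.

```swift
import Foundation

// Standard resistor color code: each color maps to a digit 0-9.
// A plain-text name travels with each color so the UI can describe
// sampled colors in words for color-deficient users.
enum BandColor: Int, CaseIterable {
    case black = 0, brown, red, orange, yellow, green, blue, violet, gray, white

    var name: String {
        switch self {
        case .black: return "black"
        case .brown: return "brown"
        case .red:   return "red"
        case .orange: return "orange"
        case .yellow: return "yellow"
        case .green: return "green"
        case .blue:  return "blue"
        case .violet: return "violet"
        case .gray:  return "gray"
        case .white: return "white"
        }
    }
}

// 4-band resistor: two significant digits followed by a power-of-ten
// multiplier band (gold/silver fractional multipliers omitted here).
func resistance(digit1: BandColor, digit2: BandColor, multiplier: BandColor) -> Double {
    let significand = Double(digit1.rawValue * 10 + digit2.rawValue)
    return significand * pow(10.0, Double(multiplier.rawValue))
}

// Example: red (2), violet (7), orange (x10^3) -> 27,000 Ω (27 kΩ)
let ohms = resistance(digit1: .red, digit2: .violet, multiplier: .orange)
print("\(ohms) Ω") // 27000.0 Ω
```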



I've run into multiple challenges on this project so far. For starters, it is difficult to independently re-learn a programming language and build an app while balancing a full college course load :). I also had to learn the nuances of the @Binding and @State property wrappers, which were the culprits behind some frustrating bugs.
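The distinction that tripped me up: @State owns a value, while @Binding is a reference to state owned by another view. If a child view accidentally gets its own @State copy instead of a binding, edits never flow back to the parent. Here's a minimal sketch of the correct pattern, with hypothetical view names rather than my app's actual code:

```swift
import SwiftUI

struct BandPickerParent: View {
    @State private var selectedBand = "red" // the single source of truth

    var body: some View {
        VStack {
            Text("Selected band: \(selectedBand)")
            BandPicker(selection: $selectedBand) // pass a binding, not a copy
        }
    }
}

struct BandPicker: View {
    @Binding var selection: String // writes here update the parent's @State

    var body: some View {
        Picker("Band color", selection: $selection) {
            ForEach(["black", "brown", "red", "orange"], id: \.self) { color in
                Text(color)
            }
        }
    }
}
```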


While the short duration of Apple's challenge limited the functionality of my submission, I definitely plan to keep improving the app, since it provides a useful tool that isn't currently available on the App Store.


I have many future plans that I wasn't able to implement in this version. These include live image tracking using Vision and CoreML to automatically detect and translate resistors. I plan to use my past experience with Roboflow to develop my own training data and Apple's CoreML to create a custom machine learning model for tracking and identification; a rough sketch of that pipeline follows below.
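As a sketch of the direction, this is roughly what a Vision request against a custom object-detection model looks like. Everything here is hypothetical: `ResistorDetector` stands in for the model class Xcode would generate once a trained CoreML model is added to the project, and no such model exists yet.

```swift
import Vision
import CoreML
import CoreGraphics

// Hypothetical: ResistorDetector is a placeholder for the class Xcode
// generates from a compiled .mlmodel trained on Roboflow-labeled images.
func detectResistors(in image: CGImage) throws {
    let coreMLModel = try ResistorDetector(configuration: MLModelConfiguration()).model
    let visionModel = try VNCoreMLModel(for: coreMLModel)

    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        // An object-detection model yields bounding boxes with
        // class labels and confidences for each detection.
        guard let results = request.results as? [VNRecognizedObjectObservation] else { return }
        for observation in results {
            let top = observation.labels.first
            print("Resistor at \(observation.boundingBox): \(top?.identifier ?? "?") (\(top?.confidence ?? 0))")
        }
    }

    let handler = VNImageRequestHandler(cgImage: image)
    try handler.perform([request])
}
```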

My long-term goal is to support visionOS for a seamless experience when working with electronics, and to expand toward broader support for vision impairment.


Thanks for reading and stay tuned for project updates!

