5 Emerging technologies our team utilizes to build better experiences


The technologies powering our next generation of work

Smart Design’s studio has been buzzing with experimentation lately—most notably during our recent in-house hackathon, where teams pushed the boundaries of what’s possible with emerging tools like AWS Trainium and Small Language Models. That same spirit of hands-on exploration drives our day-to-day work with clients as we prototype faster, build smarter systems, and design more responsive experiences. Across our practice, five technologies in particular are helping us move from idea to impact with greater speed and fidelity than ever before.

01 Iterating faster with Unreal Engine

Often throughout the product development process, we find ourselves needing to prototype multiple configurations of a product. Building and testing each version can become not only time-consuming but cost-prohibitive. Our solution? Bringing prototyping into the simulation space through Unreal Engine.

During a recent project, our team faced the challenge of placing cameras into a product that lacked a defined physical layout. Traditionally, determining where the cameras needed to go would mean physically constructing each layout, placing the cameras, and testing the outputs. Now, with Unreal Engine, we can import the 3D CAD for each of our product’s configurations. The cameras in this virtual space can be adjusted indefinitely to achieve optimal results, without building any custom physical hardware to support the changes. Taking advantage of Unreal Engine’s rendering system, we can place the product in a true-to-life simulated environment and evaluate the cameras’ performance in a realistic scenario.
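
As a rough illustration of what this looks like in practice, the sketch below uses Unreal’s editor Python API (available when the Python Editor Script Plugin is enabled) to spawn candidate cameras in an imported layout. The labels, positions, and angles here are hypothetical placeholders, not values from the project.

```python
# Minimal sketch: spawning candidate cameras in an Unreal Engine level
# via the editor Python API. Coordinates (cm) and rotations (degrees)
# below are illustrative placeholders.
import unreal

# Candidate camera mounts for one product configuration (hypothetical).
CANDIDATE_MOUNTS = [
    {"label": "Cam_Front", "loc": (120.0, 0.0, 90.0), "rot": (0.0, 180.0, 0.0)},
    {"label": "Cam_Side",  "loc": (0.0, 150.0, 75.0), "rot": (0.0, -90.0, 0.0)},
]

for mount in CANDIDATE_MOUNTS:
    x, y, z = mount["loc"]
    pitch, yaw, roll = mount["rot"]
    # Spawn a cinematic camera actor at the candidate position.
    camera = unreal.EditorLevelLibrary.spawn_actor_from_class(
        unreal.CineCameraActor,
        unreal.Vector(x, y, z),
        unreal.Rotator(roll=roll, pitch=pitch, yaw=yaw),
    )
    camera.set_actor_label(mount["label"])
```

Because each rendition is just a change to this list, trying a new placement is a script re-run rather than a hardware rebuild.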

With Unreal Engine we’re able to simulate sensors, cameras, and physical hardware setups quickly and easily, allowing us to make small changes to each rendition with minimal engineering time.
Bryce Copenhaver
Senior Systems Engineer

02 Lighter, more malleable models

Smart Design recently hosted a hackathon where participants used AWS’s Trainium chips to create Small Language Models (SLMs), an exercise that aligns closely with our recent client work. We’ve seen a significant rise in requests for custom hardware that requires AI at the edge, often in low-power environments where lightweight, efficient models are essential. The emerging advantages of SLMs are especially compelling for us because they allow for smaller, task-specific models that can be trained to perform highly targeted functions. At the same time, we’ve been exploring new AI-driven UX experiences, particularly around content creation.
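
To make the size class concrete, here is a minimal sketch of running a small instruction-tuned model locally with Hugging Face’s transformers library. The model ID and prompt are illustrative examples of the scale we mean, not a model we ship.

```python
# Illustrative sketch: local inference with a small language model via
# Hugging Face transformers. The model ID below is one public example
# of a sub-billion-parameter instruct model; swap in your own.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "HuggingFaceTB/SmolLM2-360M-Instruct"  # ~360M params, example only

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

# A narrow, task-specific prompt of the kind an edge device might run.
prompt = "Summarize this sensor log in one sentence: ..."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```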

SLMs present an exciting opportunity to generate precise, focused components without relying on the breadth and complexity of full-scale LLMs.
John Anderson
Executive Technology Director & Partner

03 Building a real-time edge AI data layer with Redis

Real-time, on-the-edge computer vision systems struggle with the rapid flow of data moving between components that all need answers at the same moment. Without a unifying layer, our sensor outputs, inference results, and system states risk arriving out of sync, slowing the entire pipeline. Redis, an in-memory database, can provide a data layer that handles the ultra-fast reads and writes needed to move information instantly between modules. Its flexible data structures let us efficiently store and share everything from configuration settings to live results, while its publisher/subscriber model ensures that updates from one process are immediately available to others. This real-time communication keeps our pipelines synchronized as they generate, analyze, and interpret data in parallel.
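
A minimal sketch of this pattern with the redis-py client is below: one process writes the latest inference result and publishes a notification, while another subscribes and reads the fresh state. The key and channel names are hypothetical.

```python
# Sketch: sharing pipeline state through Redis. A producer stores a
# result and publishes a pointer to it; a consumer blocks on the
# channel and reads the fresh state. Names below are hypothetical.
import json
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# Producer side: store the latest inference result, then notify listeners.
result = {"frame_id": 1042, "objects": ["person"], "latency_ms": 8.3}
r.set("vision:latest_result", json.dumps(result))
r.publish("vision:updates", "vision:latest_result")

# Consumer side: wait for an update, then fetch the key it points to.
pubsub = r.pubsub()
pubsub.subscribe("vision:updates")
for message in pubsub.listen():
    if message["type"] == "message":
        latest = json.loads(r.get(message["data"]))
        print("new result for frame", latest["frame_id"])
        break
```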

By combining speed, reliability, and simplicity, Redis transforms a collection of independent processes into a coordinated, responsive edge AI platform.
Jehan Diaz
Systems Engineer

04 Automating annotation with the Segment Anything Model

Self-driving cars, retail checkout systems, and home security devices are just a few modern products powered by computer vision. While these models achieve incredible results, their progress often hits a major bottleneck: data annotation. Labeling training data is slow, repetitive, and expensive. At Smart Design, we faced these same challenges while developing computer vision models for an integrated home product. To combat this, we built our own automated annotation platform to make the process fast, affordable, and stress-free.

We have leveraged Meta’s open-source SAM2 (Segment Anything Model 2) to dramatically accelerate image annotation for our computer vision datasets. Our Flask-based service lets annotators simply draw a bounding box around an object of interest; SAM2 then automatically generates precise, pixel-level segmentation masks, eliminating the need for tedious manual polygon tracing. For video sequences, an annotator labels a single frame, and SAM2’s video predictor tracks objects across subsequent frames, propagating that annotation throughout the entire sequence. Running on a GPU, the service achieves real-time performance that makes annotation 10-20x faster than traditional manual methods.
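
Stripped of the Flask plumbing, the core interaction looks roughly like the sketch below, using the image predictor from Meta’s open-source sam2 package. The checkpoint ID is one of Meta’s published releases; the image path and box coordinates are illustrative.

```python
# Sketch: box-prompted segmentation with Meta's open-source SAM2
# (https://github.com/facebookresearch/sam2). This is the core call a
# service like ours wraps, not the service itself.
import numpy as np
from PIL import Image
from sam2.sam2_image_predictor import SAM2ImagePredictor

# Load a published SAM2 checkpoint from the Hugging Face Hub.
predictor = SAM2ImagePredictor.from_pretrained("facebook/sam2-hiera-large")

image = np.array(Image.open("frame_0001.jpg").convert("RGB"))  # example path
predictor.set_image(image)

# An annotator's bounding box in (x_min, y_min, x_max, y_max) pixels.
box = np.array([410, 220, 780, 640])
masks, scores, _ = predictor.predict(box=box, multimask_output=False)
print("mask pixels:", int(masks[0].sum()), "confidence:", float(scores[0]))
```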

This automation allows our team to focus on quality control and edge cases rather than spending hours manually tracing object boundaries, significantly accelerating our computer vision model development cycle.
Henry Young
Data Scientist

05 Rapid prototyping with AI coding tools

The advent of Claude Code, OpenAI’s Codex, Cursor, and similar tools has allowed PMs like myself to quickly prototype abstract ideas rather than just describe them, which is especially helpful when conveying complex flows or actions within software. Compared with the old back-and-forth of iterating on bullet-point descriptions with developers, a working prototype is the truest form of “show, don’t tell.” Using these tools also encourages flow state and removes the need for everyone, including clients, to jump on synchronous calls. One potential drawback: a PM can use this method to “prove” something is feasible when, in reality, the prototype exists without the constraints of a larger system.

These tools let us transform abstract ideas into working prototypes instantly, unlocking clearer conversations and faster decisions.
Tyler Sanborn
Technical Product Manager

Let’s design a smarter world together