Applied AI Engineer
Job Description
Location: Remote / Miami preferred
About Us
Atomic is the venture studio that co-founds companies by pairing founders with the best ideas, teams, and resources, and funding those with the most potential. When entrepreneurs co-found with Atomic, they team up with an experienced group of operators who have started dozens of companies and created billions of dollars in enterprise value.
Industry disruptors like Bungalow, Found, Hims and Hers, Homebound, OpenStore, and Replicant all started at Atomic along with dozens more. Atomic was founded in 2012 by serial entrepreneur Jack Abraham and has offices in NYC, Miami, and San Francisco with a distributed team across North America.
Overview
We’re a stealth-mode, AI-native startup reimagining how ecommerce brands connect with their customers across the entire commerce lifecycle, starting with revenue recovery. Our first product is a white-glove recovery tool that transforms customer drop-off into conversions, unlocking precision, personalization, and rich zero-party data that helps shape everything from marketing to product development.
We're backed by top-tier investors, operating with significant seed funding, and hiring our first wave of builders. We're now looking for an Applied AI Engineer to join our founding team and help architect and scale our agentic platform from 0 to 1. You’ll work at the core of our AI stack, rapidly prototyping, evaluating, and deploying systems that sit at the heart of the customer experience.
If you thrive in high-autonomy, high-ambiguity environments and are excited to work at the intersection of research, infrastructure, and product to bring bleeding-edge AI to life, we’d love to meet you!
The Role
As our Founding Applied AI Engineer, you’ll partner directly with the founders and core team to build, test, and ship LLM-driven agent workflows that power our customer experience. This is a hybrid role reporting to the CTO, where you’ll act as both an AI tinkerer and a product builder: rapidly testing ideas, evaluating LLM behavior, and turning prototypes into production-ready systems.
What You’ll Do
Lead end-to-end development of LLM-powered features, from prompts and agents to sub-graphs, rerankers, and eval pipelines.
Architect and scale infrastructure for experimentation, fine-tuning, and rapid iteration across agent workflows.
Make sound decisions around context engineering, determining what information to include or exclude for optimal LLM performance.
Analyze large volumes of AI traces to identify behavioral patterns, and use those insights to design and prioritize experiments for refining existing features and building net-new functionality.
Define success metrics for agent performance and customer impact, aligning closely with business objectives.
Design and execute quantitative and qualitative eval frameworks that measure agent performance, especially where quality is hard to define numerically.
Partner directly with the CTO to shape technical strategy, make build-vs-buy decisions, and prioritize foundational AI investments.
Collaborate closely with product and engineering teams to scope, spec, and optimize tool interfaces and endpoints for LLM use across the broader system.
Work with human experts to translate domain-specific knowledge and workflows into LLM-compatible experiences, extracting tacit knowledge and making it usable by our AI systems.
Stay on the frontier of LLM development, evaluating new research and running POCs to test promising techniques.
Who You Are
You know your way around the applied AI stack (prompting, embeddings, vector DBs, fine-tuning) and aren’t afraid to get your hands dirty.
You’ve built and shipped real LLM-powered features, whether agents, graphs, or custom assistants, that live in production, and you understand the difference between a novel demo and a performant, reliable product.
You have experience making principled decisions about data context and grounding: what information gets passed to LLMs, and why.
You’re analytical and comfortable exploring trace data, finding weak spots in generations, and turning observations into hypotheses and experiments.
You understand that AI is only one part of a broader product system and enjoy working across disciplines to connect LLM behavior with backend systems, user actions, and business value.
You’ve developed or applied meaningful evaluation methodologies, especially in environments where outcomes aren’t always easy to quantify.
You move fast, stay scrappy, and thrive in the chaos of an early-stage startup, especially when things are ambiguous and timelines are tight.
You chase new ideas, dive into papers, and turn research into working code that actually ships.
You care deeply about craft, clean UX, strong infrastructure, and AI that feels magical to users.
You’re an excellent communicator and collaborator who can partner with engineers, product managers, and domain experts to build thoughtful, integrated AI solutions.
You want to work side-by-side with the CTO on everything from infra decisions to how we build and scale agentic systems.
Nice to Haves
Experience with Langfuse, OpenAI tools, Hugging Face, Pinecone, or similar stacks.
Strong data science or statistics background.
Previous experience building LLM-powered features in production environments.
Exposure to ecommerce, CRM tooling, or sales/revenue tech.
Prior startup or founding experience.
Compensation
Competitive salary + meaningful early-stage equity
Open to different leveling depending on experience
We are focused on building a diverse and inclusive workforce. If you’re excited about this role but do not meet 100% of the qualifications listed above, we encourage you to apply.
-----
Atomic is an Equal Opportunity Employer and considers applicants for employment without regard to race, color, religion, sex, sexual orientation, national origin, age, disability, genetic information, or any other basis forbidden under federal, state, or local law.
Please review our CCPA policies here.