The hidden costs of pre-computing data | Chalk's Elliot Marx

09 Dec 2025 • 40 min • EN

Is your engineering team wasting budget and sacrificing latency by pre-computing data that most users never see? Chalk co-founder Elliot Marx joins Andrew Zigler to explain why the future of AI relies on real-time pipelines rather than traditional storage. They dive into solving compute challenges for major fintechs, the value of incrementalism, and Elliot's thoughts on why strong fundamental problem-solving skills still beat specific language expertise in the age of AI assistants.

Join our AI Productivity roundtable: 2026 Benchmarks Insights

*This episode was recorded live at the Engineering Leadership Conference.

Follow the show:
Subscribe to our Substack
Follow us on LinkedIn
Subscribe to our YouTube Channel
Leave us a Review

Follow the hosts:
Follow Andrew
Follow Ben
Follow Dan

Follow today's guest(s):
Elliot Marx: LinkedIn
Chalk: Website | Twitter/X | Careers

OFFERS
Start Free Trial: Get started with LinearB's AI productivity platform for free.
Book a Demo: Learn how you can ship faster, improve DevEx, and lead with confidence in the AI era.

LEARN ABOUT LINEARB
AI Code Reviews: Automate reviews to catch bugs, security risks, and performance issues before they hit production.
AI & Productivity Insights: Go beyond DORA with AI-powered recommendations and dashboards to measure and improve performance.
AI-Powered Workflow Automations: Use AI-generated PR descriptions, smart routing, and other automations to reduce developer toil.
MCP Server: Interact with your engineering data using natural language to build custom reports and get answers on the fly.

From "Dev Interrupted"

