
🤖 DeepSeek Just Shook Silicon Valley—And It Cost 68x Less to Build

PLUS: How to try China's breakthrough AI model (even if you're in the US).

Hey everyone, here's your AI update for this week!

In today's issue:

  • Why DeepSeek's $6M AI model matching ChatGPT sparked a $1 trillion market panic
  • Anthropic launched Claude for Healthcare that can read your medical records across multiple providers
  • CES 2026 proved "physical AI" is real—robots, autonomous cars, and AI-powered hardware everywhere
  • And more...

💰 1. DeepSeek's $6M AI Model Just Triggered a $1 Trillion Market Sell-Off

Chinese AI startup DeepSeek claims its R1 reasoning model matches OpenAI's capabilities while costing just $6 million to develop, 68 times less than comparable models.

Why This Matters

  • DeepSeek R1 outperforms OpenAI’s o1 model on math, coding, and logic benchmarks despite the massive cost difference
  • The model is fully open-source and free to use, with API costs 96% lower than OpenAI’s pricing (see the example just below)
  • When it was released in January 2025, it triggered a $1 trillion sell-off across global AI stocks as investors questioned Western AI’s cost advantage
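
That 96% gap is easy to test yourself, because DeepSeek's API is OpenAI-compatible: the standard openai Python client works with nothing more than a different base URL. Here's a minimal sketch; the placeholder key and the model name "deepseek-reasoner" are assumptions to verify against DeepSeek's current docs:

```python
# A minimal sketch of calling DeepSeek R1 through its OpenAI-compatible API.
# Assumptions to verify: the placeholder key, and the model name
# "deepseek-reasoner" (DeepSeek's identifier for R1 at time of writing).
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_DEEPSEEK_API_KEY",
    base_url="https://api.deepseek.com",  # same client, different endpoint
)

response = client.chat.completions.create(
    model="deepseek-reasoner",
    messages=[{"role": "user", "content": "Prove that sqrt(2) is irrational."}],
)

# R1 returns its chain of thought separately from the final answer.
print(response.choices[0].message.reasoning_content)
print(response.choices[0].message.content)
```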

The Reality Check

Microsoft reports DeepSeek is gaining massive traction outside the West: 89% market share in China, 56% in Belarus, and 43% in Russia. But adoption remains low in North America and Europe due to security concerns and government restrictions.

DeepSeek V4 is reportedly launching mid-February with elite coding performance that could surpass Claude and ChatGPT on long-context code tasks.

The Practical Impact

What you can actually do:

  • Use DeepSeek R1 for free at deepseek.com — it excels at math problems, coding challenges, and logical reasoning
  • Check its “thought process” feature to see exactly how it arrived at conclusions (like OpenAI’s o1)
  • Run it locally for complete privacy using the open-source release (see the sketch below)
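
For the local route, one well-trodden option is Ollama, which hosts R1 and its smaller distilled variants. A rough sketch using the ollama Python package, assuming the Ollama app is installed and you've already pulled a model; the exact tag "deepseek-r1:8b" may have changed, so check ollama.com:

```python
# A rough sketch of running a DeepSeek R1 distill locally via Ollama.
# Assumes the Ollama daemon is running and the model was pulled first
# (for example with: ollama pull deepseek-r1:8b). Tags may have changed.
import ollama

response = ollama.chat(
    model="deepseek-r1:8b",
    messages=[{"role": "user", "content": "How many primes are below 100?"}],
)

# Nothing leaves your machine; the model runs entirely on local hardware.
print(response["message"]["content"])
```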

However, DeepSeek operates differently on political topics due to China’s internet restrictions, and some countries have banned it over security risks.

Bottom line: When a $6 million model matches $400 million models in performance, the AI industry’s economics fundamentally shift. The question isn’t whether cheaper, open alternatives will disrupt the market — it’s how fast Western AI labs can adapt their pricing and approach.

🧾 2. Anthropic Launched Claude for Healthcare—Now AI Can Read Your Medical Records

Anthropic just unveiled Claude for Healthcare with a partnership allowing patients to connect their electronic medical records to Claude and ask health questions based on their actual history.

Why This Matters

  • Claude can now access your unified medical records across 50,000+ healthcare providers through HealthEx integration
  • Uses Model Context Protocol (MCP) to securely retrieve only relevant portions of your records for each specific question (a sketch of the pattern follows this list)
  • Launched alongside OpenAI’s ChatGPT for Health—the AI labs are racing to capture the lucrative healthcare market
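
Anthropic hasn't published the HealthEx integration itself, but MCP is an open standard, so the shape of the pattern is easy to sketch. Below is a hypothetical toy server built with the official mcp Python SDK; the records and the lookup tool are invented purely for illustration:

```python
# Hypothetical sketch of the MCP pattern: a server exposes narrow "tools"
# so a model can fetch only what a question needs, never the whole record.
# Uses the official `mcp` Python SDK; the data and tool are invented.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("health-records-demo")

# Stand-in for a real electronic health record backend.
FAKE_RECORDS = {
    "cholesterol": ["2023: LDL 140 mg/dL", "2025: LDL 118 mg/dL"],
    "medications": ["atorvastatin 20 mg daily"],
}

@mcp.tool()
def lookup(topic: str) -> list[str]:
    """Return only the record entries relevant to one topic."""
    return FAKE_RECORDS.get(topic.lower(), [])

if __name__ == "__main__":
    mcp.run()  # serves over stdio so an MCP client (e.g. Claude) can connect
```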

The Reality Check

Available now to Claude Pro and Max subscribers in the US only. Your medical data is NOT used for model training and doesn’t get stored in Claude’s memory. You control access and can disconnect anytime.

Anthropic donated MCP to the Linux Foundation’s new Agentic AI Foundation this week, joining OpenAI and Block as founding members—signaling industry-wide commitment to standardizing how AI connects to data.

The Practical Impact

Real use cases right now:

  • "What does this lab result mean?" – Get plain-English explanations of medical terminology
  • "How has my cholesterol changed over time?" – Track health metrics across years and providers
  • "What should I ask my doctor about this?" – Prepare for appointments with context-aware questions

Setup takes minutes: verify identity with biometrics and government ID, connect patient portal logins, choose what Claude can access.

Bottom line: AI accessing your medical records sounds alarming, but the ability to ask “What medications am I taking?” and get instant answers from scattered records across multiple doctors is genuinely useful. The privacy controls are solid—just know you’re trading convenience for giving an AI company (temporary) access to your health data.

🤖 3. CES 2026 Proved "Physical AI" Is Finally Real

This week's Consumer Electronics Show in Las Vegas marked a major shift: AI moved from digital interfaces into physical hardware, robots, and autonomous systems.

Why This Matters

  • Nvidia launched Vera Rubin architecture with 10x throughput improvements and 10x reduction in AI token costs
  • Boston Dynamics' humanoid Atlas robot is coming to Hyundai factories in 2028 for mass production
  • Ford announced an AI assistant that uses cameras to calculate exactly how many bags of mulch fit in your truck bed

The Reality Check

Every major chipmaker (Nvidia, AMD, Intel, Qualcomm) unveiled robotics-focused processors at CES. The event featured cowboy hat-wearing humanoid bots, surgical robots, and helper bots checking people in at events.

Nvidia’s Alpamayo self-driving model is designed to help cars understand unique situations like children chasing balls into streets—moving beyond simple object detection.

The Practical Impact

What’s actually shipping soon:

  • AI-powered robot vacuums that climb stairs and pick up objects (already available)
  • Ford’s mobile app AI assistant analyzing cargo space (launching 2026, in-car version 2027)
  • Smart home devices with offline voice control that never send data to the cloud

CES Innovation Awards went to VIXallcam (all-weather vision for trucks) and AA-2 (indoor delivery robot that navigates elevators autonomously).

Bottom line: Physical AI means intelligence embedded in hardware that moves, sees, and acts in the real world. When robots can autonomously navigate elevators and AI can tell you exactly how much cargo fits in your truck by looking at a photo, we've crossed from demos into deployable products. 2026 is when AI stops being just software.

🧠 3 Advanced Ways to Use AI to Actually Work Smarter

Here are three more tips; let us know what you think!

📖 1. Use NotebookLM's New "Interactive Mode" to Interview Your Documents

The problem: You have PDFs, articles, or reports you need to understand, but reading 50 pages feels overwhelming and you learn better through conversation.

The solution: Google's NotebookLM just added Interactive Mode (rolled out in January 2026), which lets you interrupt the AI podcast hosts mid-conversation and ask them questions in real time.

2-minute setup:

  1. Go to notebooklm.google.com
  2. Create a notebook and upload your documents (up to 50 sources)
  3. Click "Generate" under Audio Overview
  4. NEW: Click "Join Conversation" to activate Interactive Mode
  5. Interrupt the hosts and ask questions as they discuss your content

What you get: Two AI hosts discussing your documents like a podcast, but now you can jump in and say "Wait, explain that chart on page 12" or "How does this compare to the other study?"

Real example: Upload your company’s Q4 reports. The hosts start discussing revenue trends. You interrupt: "Hold on, why did European sales drop?" They pivot immediately to analyze that specific section across all uploaded documents.

The trick: The hosts maintain context across interruptions. You can have a 20-minute back-and-forth where they reference earlier points in the conversation. It’s like having two expert analysts who’ve read everything.

When to use it:

  • Researching complex topics across multiple sources
  • Preparing for meetings by "interviewing" briefing materials
  • Studying for exams with textbooks and lecture notes
  • Understanding contracts or legal documents in plain English

Tools: NotebookLM (free with Google account)

🧠 2. Use Perplexity's "Spaces" to Build Your Own Custom AI Research Assistant

The problem: You’re researching the same topic repeatedly (planning a home renovation, tracking competitors, following medical research) and you're tired of re-explaining context every time.

The solution: Perplexity Spaces (updated December 2025) lets you create topic-specific AI assistants with custom instructions, file uploads, and conversation memory that persists.

5-minute setup:

  1. Go to perplexity.ai and create an account
  2. Click "Spaces" in the sidebar → "Create Space"
  3. Add custom instructions: "You’re helping me plan a kitchen remodel. Budget: $30k. Style: modern farmhouse. Always check local building codes for Seattle."
  4. Upload relevant files (quotes, floor plans, inspiration photos)
  5. Now every search in this Space has that context automatically

What you get: An AI that remembers your project details, references your uploaded files, and doesn't make you repeat yourself across sessions.
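
Spaces is a feature of Perplexity's app rather than its API, but the underlying pattern, persistent instructions plus project context, is easy to reproduce programmatically. Here's a rough sketch against Perplexity's OpenAI-compatible API; this is not Spaces itself, and the model name "sonar" is an assumption to check against their docs:

```python
# Not Spaces itself: a rough sketch of the same pattern (persistent
# instructions reused on every query) via Perplexity's OpenAI-compatible API.
# Assumes an API key from perplexity.ai; model names change, check the docs.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_PERPLEXITY_API_KEY",
    base_url="https://api.perplexity.ai",
)

# The "Space": context you'd otherwise retype at the start of every session.
SPACE_INSTRUCTIONS = (
    "You're helping me plan a kitchen remodel. Budget: $30k. "
    "Style: modern farmhouse. Always check Seattle building codes."
)

response = client.chat.completions.create(
    model="sonar",
    messages=[
        {"role": "system", "content": SPACE_INSTRUCTIONS},
        {"role": "user", "content": "What permits will the rewiring need?"},
    ],
)
print(response.choices[0].message.content)
```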

Real example: Create a "Competitor Analysis" Space for your startup. Upload competitor websites, pricing sheets, and product docs. Every time you ask “What’s their positioning strategy?” or “How do they handle onboarding?” Perplexity searches with full context of everything you've uploaded.

The trick: Spaces are searchable across sessions. Come back three weeks later and your AI still remembers your budget, constraints, and previous decisions. It’s like having a research assistant with perfect memory.

Advanced features:

  • Pro Collections: Organize multiple Spaces by project
  • Shared Spaces: Collaborate with team members who can all query the same knowledge base
  • Source pinning: Tell Perplexity to prioritize specific websites in its searches

When to use it:

  • Long-term research projects (thesis, book research)
  • Business intelligence and competitor tracking
  • Home improvement planning
  • Medical condition research and treatment tracking

Tools: Perplexity (free basic, Pro: $20/month for unlimited searches)

🎨 3. Use ChatGPT's New "Canvas Collaboration Mode" to Build Interactive Tools (No Coding)

The problem: You need a custom calculator, tracker, or interactive tool for work, but hiring a developer costs thousands and you don’t code.

The solution: ChatGPT Canvas just added "Collaboration Mode" in a January 2026 update, letting non-technical people build functional web apps through conversation.

5-minute setup:

  1. Open ChatGPT (Plus or Team required)
  2. Select "GPT-4o with Canvas" from model picker
  3. Say: “Open a canvas and build me [describe your tool]”
  4. ChatGPT creates it in real-time → you see the live preview instantly
  5. Iterate: “Add a reset button” or “Make the colors match our brand”

What you get: A working interactive tool you can use immediately in the browser, or export as HTML to embed anywhere.

Real examples people are building:

  • ROI Calculator: “Build a calculator where I input monthly ad spend and conversion rate, and it shows projected annual revenue with a slider” (the math behind this one is sketched after the list)
  • Project Timeline: “Create a Gantt chart where I can drag tasks and it auto-updates dependencies”
  • Budget Tracker: “Make a form where I log expenses by category and it shows a pie chart of spending”
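
If you ever export that first example, the generated code reduces to a few lines of arithmetic. Here's a toy sketch of the math such a calculator might implement; the cost per click, order value, and the linear spend-to-traffic model are all invented placeholders:

```python
# Toy sketch of the arithmetic behind the ROI-calculator example above.
# Every constant here is a placeholder: cost per click, average order
# value, and the linear spend -> traffic -> revenue model.
def projected_annual_revenue(
    monthly_ad_spend: float,
    conversion_rate: float,       # fraction of visitors who buy, e.g. 0.03
    cost_per_click: float = 1.50,
    avg_order_value: float = 80.0,
) -> float:
    monthly_visitors = monthly_ad_spend / cost_per_click
    monthly_revenue = monthly_visitors * conversion_rate * avg_order_value
    return monthly_revenue * 12

# $5,000/month ad spend at a 3% conversion rate:
print(f"${projected_annual_revenue(5000, 0.03):,.0f}")  # $96,000
```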

The trick: Canvas sees your changes in real-time and can modify specific parts without rebuilding everything. Say “make just the header blue” and it updates that element only.

Advanced move: Export the code and give it to a developer to integrate into your actual product. Many teams use Canvas to prototype features before committing engineering resources.

When to use it:

  • Building internal tools for your team
  • Creating custom calculators for clients
  • Prototyping app ideas before hiring developers
  • Making interactive presentations or demos

What makes this different: Unlike regular ChatGPT code generation, Canvas shows you the working tool as you build it. You’re not copying code blocks—you’re having a conversation while watching your tool come to life.

Tools: ChatGPT Canvas (Plus: $20/month, Team: $30/user/month)

A quick note before you go

Thanks for reading this week’s Brain Bytes — I hope something here helped you move faster or think better.

How’d this one land?

See you next week,

Oliver