Going bush? Stick an LLM in your van.
How My Volkswagen Multivan Became My Personal Knowledge Engine
Welcome to AI Field Notes by Move 37, an AI product studio working with ambitious organisations. Find out more about our work here.
In strategy, some of the most valuable insights come from experiments at the margins. While a lot of the discussion about AI focuses on access to the latest releases from the major global labs, I've been testing something different: what happens, and how does it feel, when you decouple AI from connectivity entirely and take it on the road? And then try to talk to it?
This has been my experimental setup for the past few weeks: a powerful Mac housed in a rugged Pelican case, secured to the floor and charging from the supplementary battery that sits behind the passenger seat of my 2017 Volkswagen Multivan. The Mac is running Ollama, loaded with various open-source language models, and a pretty hokey voice assistant pipeline that pulls it all together.
I don’t code (I obviously “vibe code”), so the setup is straightforward: the MacBook connects to the van's audio system via Bluetooth, with basic speech-to-text and text-to-speech interfaces handling voice interaction.
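If you want a feel for how little glue is involved, here's a minimal sketch of that kind of loop in Python. It assumes a local Whisper model for speech-to-text, the ollama Python client, and pyttsx3 for speech output; the model names and the fixed five-second recording window are illustrative choices, not my exact setup.

```python
# Minimal offline voice-assistant loop: microphone -> Whisper -> Ollama -> pyttsx3.
# Assumes `pip install openai-whisper ollama pyttsx3 sounddevice numpy` and a
# local Ollama daemon with a model already pulled (e.g. `ollama pull llama3`).

import numpy as np
import sounddevice as sd
import whisper
import ollama
import pyttsx3

SAMPLE_RATE = 16_000              # Whisper expects 16 kHz mono audio
stt = whisper.load_model("base")  # a small model keeps latency tolerable
tts = pyttsx3.init()

def listen(seconds: int = 5) -> str:
    """Record a short utterance and transcribe it locally."""
    audio = sd.rec(int(seconds * SAMPLE_RATE), samplerate=SAMPLE_RATE,
                   channels=1, dtype="float32")
    sd.wait()
    result = stt.transcribe(audio.flatten(), fp16=False)
    return result["text"].strip()

def ask(prompt: str) -> str:
    """Send the transcript to a local model via the Ollama API."""
    response = ollama.chat(model="llama3",
                           messages=[{"role": "user", "content": prompt}])
    return response["message"]["content"]

while True:
    heard = listen()
    if not heard:
        continue
    print(f"You: {heard}")
    answer = ask(heard)
    print(f"Van: {answer}")
    tts.say(answer)      # plays through whatever audio output is active,
    tts.runAndWait()     # e.g. the van's Bluetooth speakers
```

Nothing in that loop touches the network once the models are on disk, which is the whole point.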
This isn't about practicality (the voice interfaces remain suboptimal, read “a bit shitty”), but it's not hard to project forward a month or two to see where this will be.
In recent times, a lot of us working in the space have become increasingly uncomfortable with the concentration of compute and capability in a tiny handful of generalised models and the companies that created them. We've run countless workshops over the past few years where we've hypothesised that nation states (including Australia; see Kangaroo LLM), large corporations and other organisations might develop their own foundational models. This was more of a hopeful prediction than a techno-futurist one. Turns out the economics of that kind of thing are far beyond all but three or four companies in the world, for now.
But then DeepSeek's recent release of their R1 model demonstrated that open-source models are approaching performance parity with proprietary systems like GPT-4. It suggests we might finally see locally deployed AI deliver capabilities previously assumed to require cloud infrastructure. This isn't the place to debate how DeepSeek put it together, whether they're a hedge fund shorting Nvidia, or whether the DeepSeek Assistant app in the app stores (a different thing to the model itself) should or shouldn't be willing to talk about Tiananmen Square. The reality is that, for me, it was a hopeful sign that the landscape could be more competitive than we'd previously thought (I just remembered the time an Australian VC asked Pan and me “what's the point of Open Source anything?”).
So with all this new #vanlife capability now functional, I've obviously been imagining remote wilderness situations where I might need to rapidly acquire new knowledge. (Note: apologies to the Move 37 team members who had to listen to me test this in the office by saying “teach me how to fish”.)
Infrastructure failures following natural disasters, and (for a touch of the fantastical) zombie apocalypse scenarios, test the system's ability to provide both practical guidance and creative problem-solving. These thought experiments pointed me towards what is perhaps the most significant advantage of edge AI: resilience.
In critical applications, from healthcare to disaster response to remote operations, systems that can deliver intelligence without connectivity are potentially more reliable. Obviously this would need to be intelligence baked into localised models, datasets and knowledge bases (offline, remember), but it's not difficult to see how that can all come together.
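Here's a rough sketch of what “baked in” could look like in practice: a tiny offline retrieval layer over local notes, using Ollama's embedding support. The survival-note corpus, the nomic-embed-text model choice and the single-snippet retrieval are illustrative assumptions, not a description of my actual setup.

```python
# Sketch: a tiny offline retrieval layer over local documents, using Ollama
# embeddings so answers can draw on knowledge bases stored in the van.
# Assumes `ollama pull nomic-embed-text` and `ollama pull llama3` were run
# while connectivity was still available. The documents are hypothetical.

import numpy as np
import ollama

docs = [
    "Boil creek water for at least one minute before drinking.",
    "A fuel stove should never be run inside a closed vehicle.",
    "Signal mirrors work best aimed just below the horizon line.",
]

def embed(text: str) -> np.ndarray:
    """Embed text with a locally stored embedding model."""
    result = ollama.embeddings(model="nomic-embed-text", prompt=text)
    return np.array(result["embedding"])

doc_vectors = [embed(d) for d in docs]  # build the index once, offline

def answer(question: str) -> str:
    """Retrieve the most relevant snippet, then ask the local model."""
    q = embed(question)
    scores = [np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v))
              for v in doc_vectors]                 # cosine similarity
    context = docs[int(np.argmax(scores))]
    prompt = f"Using this note: '{context}', answer: {question}"
    reply = ollama.chat(model="llama3",
                        messages=[{"role": "user", "content": prompt}])
    return reply["message"]["content"]

print(answer("Is it safe to drink from the creek?"))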
The current cloud-centric AI paradigm has created a pretty fragile dependency chain. When connectivity fails, intelligence disappears exactly when it might be most needed. Of course my local computer needs power too, so the jury’s still out on which paradigm has the best in-built ability to cope with different failure modes.
My Van LLM experiment suggests we're entering the early stages of a new distributed AI phase driven by three converging factors:
* Hardware advances: Devices like the M4 MacBook Pro (or very cool new Framework PCs) deliver computational power that would have been unimaginable in consumer hardware five years ago.
* Model efficiency: Open-source LLMs are becoming simultaneously more capable and less resource-intensive.
* Interface improvement: While still imperfect, speech interfaces are approaching usability thresholds for most applications in most situations. (That said, the background noise from the zombies trying to get through your air vents is likely to mess with Coqui-ai or pyttsx3; one crude mitigation is sketched after this list.)
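On that noise problem: below is a minimal sketch of the kind of crude energy-based noise gate you could put in front of the speech-to-text step, assuming sounddevice for audio capture. The threshold multiplier and adaptation rate are illustrative guesses; a proper voice-activity-detection library would do this far better.

```python
# A crude noise gate: only hand audio to the speech-to-text model when the
# signal rises well above the rolling background level. Values here are
# illustrative guesses, not tuned numbers.

import numpy as np
import sounddevice as sd

SAMPLE_RATE = 16_000
BLOCK_SECONDS = 0.5

def rms(block: np.ndarray) -> float:
    """Root-mean-square energy of an audio block."""
    return float(np.sqrt(np.mean(block ** 2)))

background = 0.01  # running estimate of ambient noise (engine, zombies)

with sd.InputStream(samplerate=SAMPLE_RATE, channels=1,
                    dtype="float32") as stream:
    while True:
        block, _ = stream.read(int(BLOCK_SECONDS * SAMPLE_RATE))
        level = rms(block.flatten())
        if level > background * 4:  # speech is much louder than ambience
            print("voice detected, would transcribe this block")
        else:
            # slowly adapt the estimate to changing ambient noise
            background = 0.95 * background + 0.05 * level
```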
For businesses, all of this suggests a near future where critical AI capabilities could be deployed without creating new dependencies or exposing sensitive data to third parties. The compliance and security implications warrant serious strategic consideration.
My Volkswagen Multivan LLM experiment started as a provocation: a way to test assumptions about what's possible with today's technology, and what's not. Yet.
The little computer behind my passenger seat, capable of sophisticated reasoning without connectivity, feels like an early indicator of intelligence becoming increasingly distributed and sovereign, creating new possibilities for resilience, privacy, and independence from centralised control and policies.
Van LLM: Stick it in your big car.