Builder Notes
Working notes from the craft — not thought leadership theatre.
The August Countdown
Five months until the EU AI Act's high-risk provisions take full effect. Not a summary of the Act — a specific breakdown of what needs to be in place by August 2, 2026.
Multilingual Models Are Not Multicultural Models
The latest AI models speak 95 languages. They understand approximately zero cultures. The gap between language fluency and cultural competence is the gap that determines whether your AI tool works or merely translates.
The Alignment Problem Is Human
Brian Christian wrote about aligning AI with human values. The harder problem is that humans can't articulate their own values clearly enough for a machine to follow.
The €500,000 Mistake
A Hamburg company got fined half a million euros for automated decision-making without meaningful human oversight. What 'human in the loop' actually means — technically, not legally.
The Arabic Calligraphy Problem
Arabic is not 'right-to-left Latin.' It is a fundamentally different typographic system — and the gap between what AI interfaces render and what Arabic readers expect is a measure of cultural illiteracy.
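One concrete symptom, sketched in Python: Unicode assigns Arabic letters their own bidirectional class, distinct from Latin letters and even from digits, so "flip the string and render" is wrong before contextual letterforms and ligatures even enter the picture. A minimal illustration:

```python
import unicodedata

# Every character carries a Unicode bidirectional class. Rendering a line
# that mixes scripts means resolving these classes per character, not
# reversing the string.
for ch in ("A", "\u0627", "1"):  # Latin capital A, Arabic alef, digit 1
    print(ch, unicodedata.bidirectional(ch))
# Latin letters are class 'L', Arabic letters 'AL', digits 'EN':
# three classes the bidi algorithm must reconcile in a single line of text.
```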
System 1 Meets the Chatbot
Daniel Kahneman's two-system framework applied to the moment a person opens an AI tool. The first judgment happens in two seconds. Most onboarding fails before it starts.
The Regulatory Sandbox Nobody Uses
Every EU member state must have an AI regulatory sandbox running by August 2, 2026. Most SMEs have never heard of them. That obscurity is a structural advantage waiting to be claimed.
The Default Is Not Neutral
Every default is a decision. Every decision reflects a culture. When an AI tool ships with defaults, it ships with a worldview — the question is whether that worldview was chosen or inherited.
Frictionless Is Not Meaningful
We designed the onboarding to be frictionless. People completed it in eleven minutes and never came back. The problem was not the friction — it was the absence of it.
The Model Card Nobody Reads
Every major AI model ships with a model card — the most honest document about what the model can and cannot do. Almost nobody reads them.
High-Context AI in a Low-Context Interface
Edward Hall divided cultures into high-context and low-context. Every chatbot is a low-context interface. In Japan, that collision produces distrust.
The Incentive Nobody Audits
Every company has stated values. Every company has an incentive system. These two things almost never align — and the gap is the best predictor of AI adoption failure.
Twenty Percent
The headline says 20% of EU enterprises have adopted AI. The microdata tells a different story — and the gap between the headline and the data is where the actual problem lives.
Hofstede Measured Six Dimensions. AI Measures Zero.
Geert Hofstede spent forty years measuring how cultures differ. Every AI tool on the market measures zero of those differences.
Psychological Safety and the AI Question
Amy Edmondson's framework meets the chatbot. When asking the machine reveals what you don't know, the room becomes the problem — not the machine.
The Checklist Will Not Save You
Companies love checklists for cultural adaptation. Translate the UI. Adjust the date format. Localise the currency. The checklist is complete. The product is still culturally incompetent.
Your Data Is Not Their Platform
Every company building AI capability on a rented platform is building on land it doesn't own. Data sovereignty is not a philosophical position — it is an architectural decision.
Cortisol Doesn't Care About Your Roadmap
Your AI implementation roadmap assumes a team operating at full cognitive capacity. Robert Sapolsky has bad news about that assumption.
Three Assumptions, Three Billion People
The Latin alphabet assumes horizontal reading, left-to-right, with spaces between words. Three assumptions. Three billion people for whom none of them hold.
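The spaces-between-words assumption is the easiest one to watch fail. A minimal sketch, purely illustrative: a word counter built on whitespace splitting, the kind baked into countless text pipelines, works for English and silently collapses for Japanese, Thai, or Chinese.

```python
def count_words(text: str) -> int:
    # Naive tokenizer: assumes whitespace separates words,
    # which holds for Latin-script text and little else.
    return len(text.split())

print(count_words("the quick brown fox"))  # 4: English behaves as expected
print(count_words("東京は今日も晴れです"))  # 1: no spaces in Japanese, so one "word"
```

The failure is silent: no exception, no warning, just a count that is wrong by an order of magnitude for roughly half the world's text.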
The Tool Your Team Won't Use
You bought the tool. You trained the team. Nobody uses it. This is not a technology failure — it is a trust failure.
Real Artists Ship
Steve Jobs said it in 1983. Forty-two years later, it remains the most important sentence in product development — and the one most AI projects ignore.